
Swift integration

from: https://savanna.readthedocs.org

Hadoop and Swift integration is the essential continuation of the Hadoop and OpenStack marriage. There were two steps to achieve this:

Swift patching

If you are still using Folsom, you need to follow these steps:

  • Go to the proxy server and find the proxy-server.conf file. Go to the [pipeline:main] section and insert a new filter BEFORE the 'authtoken' filter. The name of your new filter is not very important; you will use it only for configuration. E.g. let it be ${list_endpoints}:
[pipeline:main]
pipeline = catch_errors healthcheck cache ratelimit swift3 s3token list_endpoints authtoken keystone proxy-server
The next thing you need to do here is add the description of the new filter:
[filter:list_endpoints]
use = egg:swift#${list_endpoints}
# list_endpoints_path = /endpoints/
list_endpoints_path is not mandatory and defaults to "endpoints". This param is used for HTTP request construction; see details below.
  • Go to entry_points.txt in egg-info. For swift-1.7.4 it may be found in /usr/lib/python2.7/dist-packages/swift-1.7.4.egg-info/entry_points.txt. Add the following description to the [paste.filter_factory] section:
${list_endpoints} = swift.common.middleware.list_endpoints:filter_factory
  • And the last step: put list_endpoints.py into /python2.7/dist-packages/swift/common/middleware/.

Was Swift patched successfully?

You may check whether patching was successful by sending the following HTTP requests:

http://${proxy}:8080/endpoints/${account}/${container}/${object}
http://${proxy}:8080/endpoints/${account}/${container}
http://${proxy}:8080/endpoints/${account}

You don't need any additional headers or authorization here (see the previous section: the ${list_endpoints} filter is before the 'authtoken' filter). The response will contain the IPs of all Swift nodes which contain the corresponding object.
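For example, a quick check with curl might look like the following; the proxy host 127.0.0.1, account AUTH_test, container mycontainer, and object myobject are placeholders for your own values:

# All names here are hypothetical; substitute your own proxy host,
# Swift account, container, and object.
curl -i http://127.0.0.1:8080/endpoints/AUTH_test/mycontainer/myobject

If the filter is wired up correctly, the response body should be a JSON list of endpoint URLs, one per storage node holding a replica of the object.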

Hadoop patching

You may build the jar file yourself, choosing the latest patch from https://issues.apache.org/jira/browse/HADOOP-8545. Or you may get the latest one from the repository: https://github.com/stackforge/savanna-extra/blob/master/hadoop-swift/hadoop-swift-latest.jar. You need to put this file into the Hadoop libraries (e.g. /usr/lib/share/hadoop/lib) on each job-tracker and task-tracker node in the cluster. The main step in this section is to configure the core-site.xml file on each of these nodes.
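A minimal sketch of the per-node installation, assuming the /usr/lib/share/hadoop/lib path mentioned above (adjust it to your distribution's layout):

# Run on every job-tracker and task-tracker node; the lib path is illustrative.
# hadoop-swift-latest.jar is the file downloaded from the repository above.
sudo cp hadoop-swift-latest.jar /usr/lib/share/hadoop/lib/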

Hadoop configurations

All of these configs may be overridden by a Hadoop job or set in core-site.xml using this template:

<property>
    <name>${name} + ${config}</name>
    <value>${value}</value>
    <description>${not mandatory description}</description>
</property>

There are two types of configs here:

  1. General. The ${name} in this case equals fs.swift. Here is the list of ${config}:

    • .impl - Swift FileSystem implementation. The ${value} is org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
    • .connect.timeout - timeout for all connections. By default: 15000
    • .socket.timeout - how long the connection waits for responses from servers. By default: 60000
    • .connect.retry.count - connection retry count for all connections. By default: 3
    • .connect.throttle.delay - delay in millis between bulk operations (delete, rename, copy). By default: 0
    • .blocksize - blocksize for the filesystem. By default: 32Mb
    • .partsize - the partition size for uploads. By default: 4608*1024Kb
    • .requestsize - request size for reads in KB. By default: 64Kb
  2. Provider-specific. The patch for Hadoop supports different cloud providers. The ${name} in this case equals fs.swift.service.${provider}.

    Here is the list of ${config} (a combined core-site.xml sketch follows the list):

    • .auth.url - authorization URL
    • .tenant
    • .username
    • .password
    • .http.port
    • .https.port
    • .region - Swift region, used when the cloud has more than one Swift installation. If the region param is not set, the first region from the Keystone endpoint list will be chosen. If the region param is not found, an exception will be thrown.
    • .location-aware - turns on location awareness. False by default
    • .apikey
    • .public
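Putting the two types together, here is a minimal core-site.xml sketch for a provider named savanna (matching the example in the next section). The auth URL, tenant, and port values are illustrative placeholders, not defaults:

<configuration>
    <!-- General config: which FileSystem implementation handles swift:// paths -->
    <property>
        <name>fs.swift.impl</name>
        <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
    </property>
    <!-- Provider-specific configs for the "savanna" provider -->
    <property>
        <name>fs.swift.service.savanna.auth.url</name>
        <!-- placeholder Keystone endpoint -->
        <value>http://127.0.0.1:5000/v2.0/tokens</value>
    </property>
    <property>
        <name>fs.swift.service.savanna.tenant</name>
        <!-- placeholder tenant; recall that account = tenant -->
        <value>admin</value>
    </property>
    <property>
        <name>fs.swift.service.savanna.http.port</name>
        <!-- placeholder proxy port -->
        <value>8080</value>
    </property>
</configuration>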

Example

By this point Swift and Hadoop are ready for use, and all configs in Hadoop are in place.

In the example below the provider's name is savanna. Let's copy one object to another within the same Swift container and account, e.g. /dev/integration/temp to /dev/integration/temp1. We will use distcp for this purpose: http://hadoop.apache.org/docs/r0.19.0/distcp.html

How do you write a Swift path? In our case it looks as follows: swift://integration.savanna/temp. So the template is: swift://${container}.${provider}/${object}. We don't need to specify the account because it will be automatically determined from the tenant name in the configs. Actually, account = tenant.

Let’s run the job:

$ hadoop distcp -D fs.swift.service.savanna.username=admin \
    -D fs.swift.service.savanna.password=swordfish \
    swift://integration.savanna/temp swift://integration.savanna/temp1

After that, just check that temp1 was created.
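E.g. a listing of the container (with the same illustrative credentials as above) should now show both temp and temp1:

$ hadoop fs -D fs.swift.service.savanna.username=admin \
    -D fs.swift.service.savanna.password=swordfish \
    -ls swift://integration.savanna/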

Limitations

Note: the container name must be a valid URI.

本站僅提供存儲(chǔ)服務(wù),所有內(nèi)容均由用戶發(fā)布,如發(fā)現(xiàn)有害或侵權(quán)內(nèi)容,請(qǐng)點(diǎn)擊舉報(bào)。
打開(kāi)APP,閱讀全文并永久保存 查看更多類似文章
猜你喜歡
類似文章
S
customer-information-sheet-for-inward-payments-to-hong-kong(1).pdf
Swift錯(cuò)誤
Taylor Swift
Swift 方法(十)
【1080P 60幀】Taylor Swift
更多類似文章 >>
生活服務(wù)
分享 收藏 導(dǎo)長(zhǎng)圖 關(guān)注 下載文章
綁定賬號(hào)成功
后續(xù)可登錄賬號(hào)暢享VIP特權(quán)!
如果VIP功能使用有故障,
可點(diǎn)擊這里聯(lián)系客服!

聯(lián)系客服