The history server information resource provides overall information about the history server.

| Item | Data Type | Description |
|---|---|---|
| startedOn | long | The time the history server was started (in ms since epoch) |
| hadoopVersion | string | Version of hadoop common |
| hadoopBuildVersion | string | Hadoop common build string with build version, user, and checksum |
| hadoopVersionBuiltOn | string | Timestamp when hadoop common was built |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/info

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "historyInfo" : {
      "startedOn" : 1353512830963,
      "hadoopVersionBuiltOn" : "Wed Jan 11 21:18:36 UTC 2012",
      "hadoopBuildVersion" : "0.23.1-SNAPSHOT from 1230253 by user1 source checksum bb6e554c6d50b0397d826081017437a7",
      "hadoopVersion" : "0.23.1-SNAPSHOT"
   }
}
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/info
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 330
Server: Jetty(6.1.26)

Response Body:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<historyInfo>
  <startedOn>1353512830963</startedOn>
  <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion>
  <hadoopBuildVersion>0.23.1-SNAPSHOT from 1230253 by user1 source checksum bb6e554c6d50b0397d826081017437a7</hadoopBuildVersion>
  <hadoopVersionBuiltOn>Wed Jan 11 21:18:36 UTC 2012</hadoopVersionBuiltOn>
</historyInfo>
The following list of resources applies to MapReduce.

The jobs resource provides a list of the MapReduce jobs that have finished. It does not currently return a full list of parameters.
Multiple parameters can be specified. The started and finished times both have a begin and end parameter to allow you to specify ranges. For example, one could request all jobs that started between 1:00am and 2:00pm on 12/19/2011 with startedTimeBegin=1324256400&startedTimeEnd=1324303200. If the Begin parameter is not specified, it defaults to 0, and if the End parameter is not specified, it defaults to infinity.
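As a quick sketch (assuming the example range is interpreted as UTC), the values in the query string above can be reproduced in Python:

```python
from datetime import datetime, timezone

# Epoch timestamps for the example range above: jobs started between
# 1:00am and 2:00pm UTC on 12/19/2011. (Note the sample values here are
# second-resolution timestamps.)
begin = int(datetime(2011, 12, 19, 1, 0, tzinfo=timezone.utc).timestamp())
end = int(datetime(2011, 12, 19, 14, 0, tzinfo=timezone.utc).timestamp())

query = f"startedTimeBegin={begin}&startedTimeEnd={end}"
print(query)  # startedTimeBegin=1324256400&startedTimeEnd=1324303200
```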
When you make a request for the list of jobs, the information will be returned as an array of job objects. See also the Job API for syntax of the job object. Except this is a subset of a full job: only startTime, finishTime, id, name, queue, user, state, mapsTotal, mapsCompleted, reducesTotal, and reducesCompleted are returned.

| Item | Data Type | Description |
|---|---|---|
| job | array of job objects (JSON)/zero or more job objects (XML) | The collection of job objects |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "jobs" : {
      "job" : [
         {
            "submitTime" : 1326381344449,
            "state" : "SUCCEEDED",
            "user" : "user1",
            "reducesTotal" : 1,
            "mapsCompleted" : 1,
            "startTime" : 1326381344489,
            "id" : "job_1326381300833_1_1",
            "name" : "word count",
            "reducesCompleted" : 1,
            "mapsTotal" : 1,
            "queue" : "default",
            "finishTime" : 1326381356010
         },
         {
            "submitTime" : 1326381446500,
            "state" : "SUCCEEDED",
            "user" : "user1",
            "reducesTotal" : 1,
            "mapsCompleted" : 1,
            "startTime" : 1326381446529,
            "id" : "job_1326381300833_2_2",
            "name" : "Sleep job",
            "reducesCompleted" : 1,
            "mapsTotal" : 1,
            "queue" : "default",
            "finishTime" : 1326381582106
         }
      ]
   }
}
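A client consuming this list might look like the following sketch (the HTTP call itself is omitted; the sample embeds an abbreviated copy of the response above):

```python
import json

# Abbreviated copy of the jobs-list response above (one job kept).
response = '''
{
  "jobs": {
    "job": [
      {"submitTime": 1326381344449, "state": "SUCCEEDED", "user": "user1",
       "reducesTotal": 1, "mapsCompleted": 1, "startTime": 1326381344489,
       "id": "job_1326381300833_1_1", "name": "word count",
       "reducesCompleted": 1, "mapsTotal": 1, "queue": "default",
       "finishTime": 1326381356010}
    ]
  }
}
'''

# The payload is a "jobs" wrapper holding a "job" array.
for job in json.loads(response)["jobs"]["job"]:
    duration_ms = job["finishTime"] - job["startTime"]
    print(job["id"], job["state"], f"{duration_ms}ms")
```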
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 1922
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<jobs>
  <job>
    <submitTime>1326381344449</submitTime>
    <startTime>1326381344489</startTime>
    <finishTime>1326381356010</finishTime>
    <id>job_1326381300833_1_1</id>
    <name>word count</name>
    <queue>default</queue>
    <user>user1</user>
    <state>SUCCEEDED</state>
    <mapsTotal>1</mapsTotal>
    <mapsCompleted>1</mapsCompleted>
    <reducesTotal>1</reducesTotal>
    <reducesCompleted>1</reducesCompleted>
  </job>
  <job>
    <submitTime>1326381446500</submitTime>
    <startTime>1326381446529</startTime>
    <finishTime>1326381582106</finishTime>
    <id>job_1326381300833_2_2</id>
    <name>Sleep job</name>
    <queue>default</queue>
    <user>user1</user>
    <state>SUCCEEDED</state>
    <mapsTotal>1</mapsTotal>
    <mapsCompleted>1</mapsCompleted>
    <reducesTotal>1</reducesTotal>
    <reducesCompleted>1</reducesCompleted>
  </job>
</jobs>
A job resource contains information about a particular job identified by jobid.

| Item | Data Type | Description |
|---|---|---|
| id | string | The job id |
| name | string | The job name |
| queue | string | The queue the job was submitted to |
| user | string | The user name |
| state | string | The job state - valid values are: NEW, INITED, RUNNING, SUCCEEDED, FAILED, KILL_WAIT, KILLED, ERROR |
| diagnostics | string | A diagnostic message |
| submitTime | long | The time the job was submitted (in ms since epoch) |
| startTime | long | The time the job started (in ms since epoch) |
| finishTime | long | The time the job finished (in ms since epoch) |
| mapsTotal | int | The total number of maps |
| mapsCompleted | int | The number of completed maps |
| reducesTotal | int | The total number of reduces |
| reducesCompleted | int | The number of completed reduces |
| uberized | boolean | Indicates if the job was an uber job - ran completely in the application master |
| avgMapTime | long | The average time of a map task (in ms) |
| avgReduceTime | long | The average time of a reduce (in ms) |
| avgShuffleTime | long | The average time of a shuffle (in ms) |
| avgMergeTime | long | The average time of a merge (in ms) |
| failedReduceAttempts | int | The number of failed reduce attempts |
| killedReduceAttempts | int | The number of killed reduce attempts |
| successfulReduceAttempts | int | The number of successful reduce attempts |
| failedMapAttempts | int | The number of failed map attempts |
| killedMapAttempts | int | The number of killed map attempts |
| successfulMapAttempts | int | The number of successful map attempts |
| acls | array of acls (JSON)/zero or more acls objects (XML) | A collection of acls objects |

| Item | Data Type | Description |
|---|---|---|
| value | string | The acl value |
| name | string | The acl name |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Server: Jetty(6.1.26)
Content-Length: 720

Response Body:
{
   "job" : {
      "submitTime" : 1326381446500,
      "avgReduceTime" : 124961,
      "failedReduceAttempts" : 0,
      "state" : "SUCCEEDED",
      "successfulReduceAttempts" : 1,
      "acls" : [
         {
            "value" : "",
            "name" : "mapreduce.job.acl-modify-job"
         },
         {
            "value" : "",
            "name" : "mapreduce.job.acl-view-job"
         }
      ],
      "user" : "user1",
      "reducesTotal" : 1,
      "mapsCompleted" : 1,
      "startTime" : 1326381446529,
      "id" : "job_1326381300833_2_2",
      "avgMapTime" : 2638,
      "successfulMapAttempts" : 1,
      "name" : "Sleep job",
      "avgShuffleTime" : 2540,
      "reducesCompleted" : 1,
      "diagnostics" : "",
      "failedMapAttempts" : 0,
      "avgMergeTime" : 2589,
      "killedReduceAttempts" : 0,
      "mapsTotal" : 1,
      "queue" : "default",
      "uberized" : false,
      "killedMapAttempts" : 0,
      "finishTime" : 1326381582106
   }
}
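The attempt counters in the full job object let a client compute per-job totals; a small sketch using a trimmed copy of the response above:

```python
import json

# Trimmed copy of the job response above; only the fields used here.
job = json.loads('''
{
  "job": {
    "state": "SUCCEEDED", "uberized": false,
    "mapsTotal": 1, "mapsCompleted": 1,
    "reducesTotal": 1, "reducesCompleted": 1,
    "failedMapAttempts": 0, "killedMapAttempts": 0, "successfulMapAttempts": 1,
    "failedReduceAttempts": 0, "killedReduceAttempts": 0,
    "successfulReduceAttempts": 1
  }
}
''')["job"]

# Total attempts = failed + killed + successful, per task type.
map_attempts = (job["failedMapAttempts"] + job["killedMapAttempts"]
                + job["successfulMapAttempts"])
reduce_attempts = (job["failedReduceAttempts"] + job["killedReduceAttempts"]
                   + job["successfulReduceAttempts"])
print(job["state"], map_attempts, reduce_attempts)
```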
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 983
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<job>
  <submitTime>1326381446500</submitTime>
  <startTime>1326381446529</startTime>
  <finishTime>1326381582106</finishTime>
  <id>job_1326381300833_2_2</id>
  <name>Sleep job</name>
  <queue>default</queue>
  <user>user1</user>
  <state>SUCCEEDED</state>
  <mapsTotal>1</mapsTotal>
  <mapsCompleted>1</mapsCompleted>
  <reducesTotal>1</reducesTotal>
  <reducesCompleted>1</reducesCompleted>
  <uberized>false</uberized>
  <diagnostics/>
  <avgMapTime>2638</avgMapTime>
  <avgReduceTime>124961</avgReduceTime>
  <avgShuffleTime>2540</avgShuffleTime>
  <avgMergeTime>2589</avgMergeTime>
  <failedReduceAttempts>0</failedReduceAttempts>
  <killedReduceAttempts>0</killedReduceAttempts>
  <successfulReduceAttempts>1</successfulReduceAttempts>
  <failedMapAttempts>0</failedMapAttempts>
  <killedMapAttempts>0</killedMapAttempts>
  <successfulMapAttempts>1</successfulMapAttempts>
  <acls>
    <name>mapreduce.job.acl-modify-job</name>
    <value> </value>
  </acls>
  <acls>
    <name>mapreduce.job.acl-view-job</name>
    <value> </value>
  </acls>
</job>
With the job attempts API, you can obtain a collection of resources that represent a job attempt. When you run a GET operation on this resource, you obtain a collection of Job Attempt objects.

When you make a request for the list of job attempts, the information will be returned as an array of job attempt objects.

Job Attempts:

| Item | Data Type | Description |
|---|---|---|
| jobAttempt | array of job attempt objects (JSON)/zero or more job attempt objects (XML) | The collection of job attempt objects |

| Item | Data Type | Description |
|---|---|---|
| id | string | The job attempt id |
| nodeId | string | The node id of the node the attempt ran on |
| nodeHttpAddress | string | The node http address of the node the attempt ran on |
| logsLink | string | The http link to the job attempt logs |
| containerId | string | The id of the container for the job attempt |
| startTime | long | The start time of the attempt (in ms since epoch) |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/jobattempts

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "jobAttempts" : {
      "jobAttempt" : [
         {
            "nodeId" : "host.domain.com:8041",
            "nodeHttpAddress" : "host.domain.com:8042",
            "startTime" : 1326381444693,
            "id" : 1,
            "logsLink" : "http://host.domain.com:19888/jobhistory/logs/host.domain.com:8041/container_1326381300833_0002_01_000001/job_1326381300833_2_2/user1",
            "containerId" : "container_1326381300833_0002_01_000001"
         }
      ]
   }
}
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/jobattempts
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 575
Server: Jetty(6.1.26)

Response Body:
响应主体:
<?xml版本=“ 1.0”编码=“ UTF-8”独立=“是”?>
<jobAttempts>
<jobAttempt>
<nodeHttpAddress> host.domain.com:8042 </ nodeHttpAddress>
<nodeId> host.domain.com:8041 </ nodeId>
<id> 1 </ id>
<startTime> 1326381444693 </ startTime>
<containerId> container_1326381300833_0002_01_000001 </ containerId>
<logsLink> http://host.domain.com:19888/jobhistory/logs/host.domain.com:8041/container_1326381300833_0002_01_000001/job_1326381300833_2_2/user1 </ logsLink>
</ jobAttempt>
</ jobAttempts>
With the job counters API, you can obtain a collection of resources that represent all the counters for that job.

| Item | Data Type | Description |
|---|---|---|
| id | string | The job id |
| counterGroup | array of counterGroup objects (JSON)/zero or more counterGroup objects (XML) | A collection of counter group objects |

| Item | Data Type | Description |
|---|---|---|
| name | string | The name of the counter |
| reduceCounterValue | long | The counter value of reduce tasks |
| mapCounterValue | long | The counter value of map tasks |
| totalCounterValue | long | The counter value of all tasks |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/counters

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "jobCounters" : {
      "id" : "job_1326381300833_2_2",
      "counterGroup" : [
         {
            "counterGroupName" : "Shuffle Errors",
            "counter" : [
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "BAD_ID"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "CONNECTION"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "IO_ERROR"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "WRONG_LENGTH"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "WRONG_MAP"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "WRONG_REDUCE"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
            "counter" : [
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 2483,
                  "name" : "FILE_BYTES_READ"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 108525,
                  "name" : "FILE_BYTES_WRITTEN"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "FILE_READ_OPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "FILE_LARGE_READ_OPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "FILE_WRITE_OPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 48,
                  "name" : "HDFS_BYTES_READ"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "HDFS_BYTES_WRITTEN"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1,
                  "name" : "HDFS_READ_OPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "HDFS_LARGE_READ_OPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "HDFS_WRITE_OPS"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
            "counter" : [
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1,
                  "name" : "MAP_INPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1200,
                  "name" : "MAP_OUTPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 4800,
                  "name" : "MAP_OUTPUT_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 2235,
                  "name" : "MAP_OUTPUT_MATERIALIZED_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 48,
                  "name" : "SPLIT_RAW_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "COMBINE_INPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "COMBINE_OUTPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1200,
                  "name" : "REDUCE_INPUT_GROUPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 2235,
                  "name" : "REDUCE_SHUFFLE_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1200,
                  "name" : "REDUCE_INPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "REDUCE_OUTPUT_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 2400,
                  "name" : "SPILLED_RECORDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1,
                  "name" : "SHUFFLED_MAPS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "FAILED_SHUFFLE"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1,
                  "name" : "MERGED_MAP_OUTPUTS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 113,
                  "name" : "GC_TIME_MILLIS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 1830,
                  "name" : "CPU_MILLISECONDS"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 478068736,
                  "name" : "PHYSICAL_MEMORY_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 2159284224,
                  "name" : "VIRTUAL_MEMORY_BYTES"
               },
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 378863616,
                  "name" : "COMMITTED_HEAP_BYTES"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter",
            "counter" : [
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "BYTES_READ"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
            "counter" : [
               {
                  "reduceCounterValue" : 0,
                  "mapCounterValue" : 0,
                  "totalCounterValue" : 0,
                  "name" : "BYTES_WRITTEN"
               }
            ]
         }
      ]
   }
}
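The nested counterGroup/counter layout is easiest to use once flattened; a sketch using a trimmed copy of the response above (two groups, one counter each):

```python
import json

# Trimmed jobCounters response following the structure shown above:
# counterGroup -> counter -> the three per-counter values.
payload = json.loads('''
{
  "jobCounters": {
    "id": "job_1326381300833_2_2",
    "counterGroup": [
      {"counterGroupName": "Shuffle Errors",
       "counter": [{"reduceCounterValue": 0, "mapCounterValue": 0,
                    "totalCounterValue": 0, "name": "BAD_ID"}]},
      {"counterGroupName": "org.apache.hadoop.mapreduce.FileSystemCounter",
       "counter": [{"reduceCounterValue": 0, "mapCounterValue": 0,
                    "totalCounterValue": 2483, "name": "FILE_BYTES_READ"}]}
    ]
  }
}
''')

# Flatten into {(group, counter): total} for easy lookup.
totals = {(g["counterGroupName"], c["name"]): c["totalCounterValue"]
          for g in payload["jobCounters"]["counterGroup"]
          for c in g["counter"]}
print(totals[("org.apache.hadoop.mapreduce.FileSystemCounter",
              "FILE_BYTES_READ")])
```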
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/counters
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 7030
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<jobCounters>
  <id>job_1326381300833_2_2</id>
  <counterGroup>
    <counterGroupName>Shuffle Errors</counterGroupName>
    <counter>
      <name>BAD_ID</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>CONNECTION</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>IO_ERROR</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>WRONG_LENGTH</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>WRONG_MAP</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>WRONG_REDUCE</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
  </counterGroup>
  <counterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName>
    <counter>
      <name>FILE_BYTES_READ</name>
      <totalCounterValue>2483</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>FILE_BYTES_WRITTEN</name>
      <totalCounterValue>108525</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>FILE_READ_OPS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>FILE_LARGE_READ_OPS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>FILE_WRITE_OPS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>HDFS_BYTES_READ</name>
      <totalCounterValue>48</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>HDFS_BYTES_WRITTEN</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>HDFS_READ_OPS</name>
      <totalCounterValue>1</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>HDFS_LARGE_READ_OPS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>HDFS_WRITE_OPS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
  </counterGroup>
  <counterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName>
    <counter>
      <name>MAP_INPUT_RECORDS</name>
      <totalCounterValue>1</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>MAP_OUTPUT_RECORDS</name>
      <totalCounterValue>1200</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>MAP_OUTPUT_BYTES</name>
      <totalCounterValue>4800</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>MAP_OUTPUT_MATERIALIZED_BYTES</name>
      <totalCounterValue>2235</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>SPLIT_RAW_BYTES</name>
      <totalCounterValue>48</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>COMBINE_INPUT_RECORDS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>COMBINE_OUTPUT_RECORDS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>REDUCE_INPUT_GROUPS</name>
      <totalCounterValue>1200</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>REDUCE_SHUFFLE_BYTES</name>
      <totalCounterValue>2235</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>REDUCE_INPUT_RECORDS</name>
      <totalCounterValue>1200</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>REDUCE_OUTPUT_RECORDS</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>SPILLED_RECORDS</name>
      <totalCounterValue>2400</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>SHUFFLED_MAPS</name>
      <totalCounterValue>1</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>FAILED_SHUFFLE</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>MERGED_MAP_OUTPUTS</name>
      <totalCounterValue>1</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>GC_TIME_MILLIS</name>
      <totalCounterValue>113</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>CPU_MILLISECONDS</name>
      <totalCounterValue>1830</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>PHYSICAL_MEMORY_BYTES</name>
      <totalCounterValue>478068736</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>VIRTUAL_MEMORY_BYTES</name>
      <totalCounterValue>2159284224</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
    <counter>
      <name>COMMITTED_HEAP_BYTES</name>
      <totalCounterValue>378863616</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
  </counterGroup>
  <counterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter</counterGroupName>
    <counter>
      <name>BYTES_READ</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
  </counterGroup>
  <counterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName>
    <counter>
      <name>BYTES_WRITTEN</name>
      <totalCounterValue>0</totalCounterValue>
      <mapCounterValue>0</mapCounterValue>
      <reduceCounterValue>0</reduceCounterValue>
    </counter>
  </counterGroup>
</jobCounters>
A job configuration resource contains information about the job configuration for this job.

JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/conf

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:

This is a small snippet of the output, as the full output is very large. The actual output contains every property in the job configuration file.
{
   "conf" : {
      "path" : "hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml",
      "property" : [
         {
            "value" : "/home/hadoop/hdfs/data",
            "name" : "dfs.datanode.data.dir",
            "source" : ["hdfs-site.xml", "job.xml"]
         },
         {
            "value" : "org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer",
            "name" : "hadoop.http.filter.initializers",
            "source" : ["programatically", "job.xml"]
         },
         {
            "value" : "/home/hadoop/tmp",
            "name" : "mapreduce.cluster.temp.dir",
            "source" : ["mapred-site.xml"]
         },
         ...
      ]
   }
}
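A client will usually want the properties as a name-to-value mapping rather than a list; a sketch using a trimmed copy of the response above (the source field records which files set each value):

```python
import json

# Trimmed conf response; "property" is a list of {name, value, source}.
conf = json.loads('''
{
  "conf": {
    "path": "hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml",
    "property": [
      {"value": "/home/hadoop/hdfs/data", "name": "dfs.datanode.data.dir",
       "source": ["hdfs-site.xml", "job.xml"]}
    ]
  }
}
''')["conf"]

# Index properties by name for direct lookup.
props = {p["name"]: p["value"] for p in conf["property"]}
print(props["dfs.datanode.data.dir"])
```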
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/conf
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 552
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<conf>
  <path>hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml</path>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
    <source>hdfs-site.xml</source>
    <source>job.xml</source>
  </property>
  <property>
    <name>hadoop.http.filter.initializers</name>
    <value>org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer</value>
    <source>programatically</source>
    <source>job.xml</source>
  </property>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value>/home/hadoop/tmp</value>
    <source>mapred-site.xml</source>
  </property>
  ...
</conf>
With the tasks API, you can obtain a collection of resources that represent a task within a job. When you run a GET operation on this resource, you obtain a collection of Task objects.

When you make a request for the list of tasks, the information will be returned as an array of task objects. See also the Task API for syntax of the task object.

| Item | Data Type | Description |
|---|---|---|
| task | array of task objects (JSON)/zero or more task objects (XML) | The collection of task objects. |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "tasks" : {
      "task" : [
         {
            "progress" : 100,
            "elapsedTime" : 6777,
            "state" : "SUCCEEDED",
            "startTime" : 1326381446541,
            "id" : "task_1326381300833_2_2_m_0",
            "type" : "MAP",
            "successfulAttempt" : "attempt_1326381300833_2_2_m_0_0",
            "finishTime" : 1326381453318
         },
         {
            "progress" : 100,
            "elapsedTime" : 135559,
            "state" : "SUCCEEDED",
            "startTime" : 1326381446544,
            "id" : "task_1326381300833_2_2_r_0",
            "type" : "REDUCE",
            "successfulAttempt" : "attempt_1326381300833_2_2_r_0_0",
            "finishTime" : 1326381582103
         }
      ]
   }
}
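Using the sample data above, a quick consistency check confirms that elapsedTime equals finishTime minus startTime for completed tasks (a sketch; values copied from the response shown):

```python
import json

# Copy of the tasks response above, with only the fields used here.
tasks = json.loads('''
{
  "tasks": {
    "task": [
      {"elapsedTime": 6777, "state": "SUCCEEDED",
       "startTime": 1326381446541, "id": "task_1326381300833_2_2_m_0",
       "type": "MAP", "finishTime": 1326381453318},
      {"elapsedTime": 135559, "state": "SUCCEEDED",
       "startTime": 1326381446544, "id": "task_1326381300833_2_2_r_0",
       "type": "REDUCE", "finishTime": 1326381582103}
    ]
  }
}
''')["tasks"]["task"]

# For finished tasks, elapsedTime == finishTime - startTime.
for t in tasks:
    assert t["finishTime"] - t["startTime"] == t["elapsedTime"]
    print(t["type"], t["elapsedTime"])
```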
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 653
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tasks>
  <task>
    <startTime>1326381446541</startTime>
    <finishTime>1326381453318</finishTime>
    <elapsedTime>6777</elapsedTime>
    <progress>100.0</progress>
    <id>task_1326381300833_2_2_m_0</id>
    <state>SUCCEEDED</state>
    <type>MAP</type>
    <successfulAttempt>attempt_1326381300833_2_2_m_0_0</successfulAttempt>
  </task>
  <task>
    <startTime>1326381446544</startTime>
    <finishTime>1326381582103</finishTime>
    <elapsedTime>135559</elapsedTime>
    <progress>100.0</progress>
    <id>task_1326381300833_2_2_r_0</id>
    <state>SUCCEEDED</state>
    <type>REDUCE</type>
    <successfulAttempt>attempt_1326381300833_2_2_r_0_0</successfulAttempt>
  </task>
</tasks>
A Task resource contains information about a particular task within a job.

| Item | Data Type | Description |
|---|---|---|
| id | string | The task id |
| state | string | The state of the task - valid values are: NEW, SCHEDULED, RUNNING, SUCCEEDED, FAILED, KILL_WAIT, KILLED |
| type | string | The task type - MAP or REDUCE |
| successfulAttempt | string | The id of the last successful attempt |
| progress | float | The progress of the task as a percent |
| startTime | long | The time in which the task started (in ms since epoch) or -1 if it was never started |
| finishTime | long | The time in which the task finished (in ms since epoch) |
| elapsedTime | long | The elapsed time since the application started (in ms) |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "task" : {
      "progress" : 100,
      "elapsedTime" : 6777,
      "state" : "SUCCEEDED",
      "startTime" : 1326381446541,
      "id" : "task_1326381300833_2_2_m_0",
      "type" : "MAP",
      "successfulAttempt" : "attempt_1326381300833_2_2_m_0_0",
      "finishTime" : 1326381453318
   }
}
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 299
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<task>
  <startTime>1326381446541</startTime>
  <finishTime>1326381453318</finishTime>
  <elapsedTime>6777</elapsedTime>
  <progress>100.0</progress>
  <id>task_1326381300833_2_2_m_0</id>
  <state>SUCCEEDED</state>
  <type>MAP</type>
  <successfulAttempt>attempt_1326381300833_2_2_m_0_0</successfulAttempt>
</task>
With the task counters API, you can obtain a collection of resources that represent all the counters for that task.

| Item | Data Type | Description |
|---|---|---|
| id | string | The task id |
| taskCounterGroup | array of counterGroup objects (JSON)/zero or more counterGroup objects (XML) | A collection of counter group objects |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/counters

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "jobTaskCounters" : {
      "id" : "task_1326381300833_2_2_m_0",
      "taskCounterGroup" : [
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
            "counter" : [
               {
                  "value" : 2363,
                  "name" : "FILE_BYTES_READ"
               },
               {
                  "value" : 54372,
                  "name" : "FILE_BYTES_WRITTEN"
               },
               {
                  "value" : 0,
                  "name" : "FILE_READ_OPS"
               },
               {
                  "value" : 0,
                  "name" : "FILE_LARGE_READ_OPS"
               },
               {
                  "value" : 0,
                  "name" : "FILE_WRITE_OPS"
               },
               {
                  "value" : 0,
                  "name" : "HDFS_BYTES_READ"
               },
               {
                  "value" : 0,
                  "name" : "HDFS_BYTES_WRITTEN"
               },
               {
                  "value" : 0,
                  "name" : "HDFS_READ_OPS"
               },
               {
                  "value" : 0,
                  "name" : "HDFS_LARGE_READ_OPS"
               },
               {
                  "value" : 0,
                  "name" : "HDFS_WRITE_OPS"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
            "counter" : [
               {
                  "value" : 0,
                  "name" : "COMBINE_INPUT_RECORDS"
               },
               {
                  "value" : 0,
                  "name" : "COMBINE_OUTPUT_RECORDS"
               },
               {
                  "value" : 460,
                  "name" : "REDUCE_INPUT_GROUPS"
               },
               {
                  "value" : 2235,
                  "name" : "REDUCE_SHUFFLE_BYTES"
               },
               {
                  "value" : 460,
                  "name" : "REDUCE_INPUT_RECORDS"
               },
               {
                  "value" : 0,
                  "name" : "REDUCE_OUTPUT_RECORDS"
               },
               {
                  "value" : 0,
                  "name" : "SPILLED_RECORDS"
               },
               {
                  "value" : 1,
                  "name" : "SHUFFLED_MAPS"
               },
               {
                  "value" : 0,
                  "name" : "FAILED_SHUFFLE"
               },
               {
                  "value" : 1,
                  "name" : "MERGED_MAP_OUTPUTS"
               },
               {
                  "value" : 26,
                  "name" : "GC_TIME_MILLIS"
               },
               {
                  "value" : 860,
                  "name" : "CPU_MILLISECONDS"
               },
               {
                  "value" : 107839488,
                  "name" : "PHYSICAL_MEMORY_BYTES"
               },
               {
                  "value" : 1123147776,
                  "name" : "VIRTUAL_MEMORY_BYTES"
               },
               {
                  "value" : 57475072,
                  "name" : "COMMITTED_HEAP_BYTES"
               }
            ]
         },
         {
            "counterGroupName" : "Shuffle Errors",
            "counter" : [
               {
                  "value" : 0,
                  "name" : "BAD_ID"
               },
               {
                  "value" : 0,
                  "name" : "CONNECTION"
               },
               {
                  "value" : 0,
                  "name" : "IO_ERROR"
               },
               {
                  "value" : 0,
                  "name" : "WRONG_LENGTH"
               },
               {
                  "value" : 0,
                  "name" : "WRONG_MAP"
               },
               {
                  "value" : 0,
                  "name" : "WRONG_REDUCE"
               }
            ]
         },
         {
            "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
            "counter" : [
               {
                  "value" : 0,
                  "name" : "BYTES_WRITTEN"
               }
            ]
         }
      ]
   }
}
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/counters
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 2660
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<jobTaskCounters>
  <id>task_1326381300833_2_2_m_0</id>
  <taskCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName>
    <counter>
      <name>FILE_BYTES_READ</name>
      <value>2363</value>
    </counter>
    <counter>
      <name>FILE_BYTES_WRITTEN</name>
      <value>54372</value>
    </counter>
    <counter>
      <name>FILE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>FILE_LARGE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>FILE_WRITE_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_BYTES_READ</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_BYTES_WRITTEN</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_LARGE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_WRITE_OPS</name>
      <value>0</value>
    </counter>
  </taskCounterGroup>
  <taskCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName>
    <counter>
      <name>COMBINE_INPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>COMBINE_OUTPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>REDUCE_INPUT_GROUPS</name>
      <value>460</value>
    </counter>
    <counter>
      <name>REDUCE_SHUFFLE_BYTES</name>
      <value>2235</value>
    </counter>
    <counter>
      <name>REDUCE_INPUT_RECORDS</name>
      <value>460</value>
    </counter>
    <counter>
      <name>REDUCE_OUTPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>SPILLED_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>SHUFFLED_MAPS</name>
      <value>1</value>
    </counter>
    <counter>
      <name>FAILED_SHUFFLE</name>
      <value>0</value>
    </counter>
    <counter>
      <name>MERGED_MAP_OUTPUTS</name>
      <value>1</value>
    </counter>
    <counter>
      <name>GC_TIME_MILLIS</name>
      <value>26</value>
    </counter>
    <counter>
      <name>CPU_MILLISECONDS</name>
      <value>860</value>
    </counter>
    <counter>
      <name>PHYSICAL_MEMORY_BYTES</name>
      <value>107839488</value>
    </counter>
    <counter>
      <name>VIRTUAL_MEMORY_BYTES</name>
      <value>1123147776</value>
    </counter>
    <counter>
      <name>COMMITTED_HEAP_BYTES</name>
      <value>57475072</value>
    </counter>
  </taskCounterGroup>
  <taskCounterGroup>
    <counterGroupName>Shuffle Errors</counterGroupName>
    <counter>
      <name>BAD_ID</name>
      <value>0</value>
    </counter>
    <counter>
      <name>CONNECTION</name>
      <value>0</value>
    </counter>
    <counter>
      <name>IO_ERROR</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_LENGTH</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_MAP</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_REDUCE</name>
      <value>0</value>
    </counter>
  </taskCounterGroup>
  <taskCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName>
    <counter>
      <name>BYTES_WRITTEN</name>
      <value>0</value>
    </counter>
  </taskCounterGroup>
</jobTaskCounters>
With the task attempts API, you can obtain a collection of resources that represent a task attempt within a job. When you run a GET operation on this resource, you obtain a collection of Task Attempt objects.

When you make a request for the list of task attempts, the information will be returned as an array of task attempt objects. See also the Task Attempt API for syntax of the task attempt object.

| Item | Data Type | Description |
|---|---|---|
| taskAttempt | array of task attempt objects (JSON)/zero or more task attempt objects (XML) | The collection of task attempt objects |
JSON response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts

Response Header:

HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

Response Body:
{
   "taskAttempts" : {
      "taskAttempt" : [
         {
            "assignedContainerId" : "container_1326381300833_0002_01_000002",
            "progress" : 100,
            "elapsedTime" : 2638,
            "state" : "SUCCEEDED",
            "diagnostics" : "",
            "rack" : "/98.139.92.0",
            "nodeHttpAddress" : "host.domain.com:8042",
            "startTime" : 1326381450680,
            "id" : "attempt_1326381300833_2_2_m_0_0",
            "type" : "MAP",
            "finishTime" : 1326381453318
         }
      ]
   }
}
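Task-attempt ids from this list can be plugged back into the per-attempt URL; a sketch (the base address is a placeholder, as throughout this document):

```python
# Compose the per-attempt URL from the ids in the list response above.
# "history-server-http-address:port" is a placeholder, as in the examples.
base = "http://history-server-http-address:port/ws/v1/history/mapreduce"
job_id = "job_1326381300833_2_2"
task_id = "task_1326381300833_2_2_m_0"
attempt_id = "attempt_1326381300833_2_2_m_0_0"

url = f"{base}/jobs/{job_id}/tasks/{task_id}/attempts/{attempt_id}"
print(url)
```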
XML response

HTTP Request:

GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts
Accept: application/xml

Response Header:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 537
Server: Jetty(6.1.26)

Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<taskAttempts>
  <taskAttempt>
    <startTime>1326381450680</startTime>
    <finishTime>1326381453318</finishTime>
    <elapsedTime>2638</elapsedTime>
    <progress>100.0</progress>
    <id>attempt_1326381300833_2_2_m_0_0</id>
    <rack>/98.139.92.0</rack>
    <state>SUCCEEDED</state>
    <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
    <diagnostics/>
    <type>MAP</type>
    <assignedContainerId>container_1326381300833_0002_01_000002</assignedContainerId>
  </taskAttempt>
</taskAttempts>
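For programmatic use, the JSON list above can be handled with a few lines of Python. The payload here is copied inline from the response body rather than fetched, since the hostnames in these examples are placeholders; a minimal sketch:

```python
import json

# Sample payload copied from the JSON response above.
body = '''
{
  "taskAttempts": {
    "taskAttempt": [
      {
        "assignedContainerId": "container_1326381300833_0002_01_000002",
        "progress": 100,
        "elapsedTime": 2638,
        "state": "SUCCEEDED",
        "diagnostics": "",
        "rack": "/98.139.92.0",
        "nodeHttpAddress": "host.domain.com:8042",
        "startTime": 1326381450680,
        "id": "attempt_1326381300833_2_2_m_0_0",
        "type": "MAP",
        "finishTime": 1326381453318
      }
    ]
  }
}
'''

# The collection lives under taskAttempts.taskAttempt, as described
# in the table above.
attempts = json.loads(body)["taskAttempts"]["taskAttempt"]
for a in attempts:
    print(a["id"], a["state"], a["elapsedTime"])
```

Note that `elapsedTime` is simply `finishTime - startTime`, so the two timestamps and the elapsed field can be cross-checked against each other.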
A task attempt resource contains information about a particular task attempt within a job.
| Item | Data Type | Description |
|---|---|---|
| id | string | The task attempt id |
| rack | string | The rack |
| state | string | The state of the task attempt - valid values are: NEW, UNASSIGNED, ASSIGNED, RUNNING, COMMIT_PENDING, SUCCESS_CONTAINER_CLEANUP, SUCCEEDED, FAIL_CONTAINER_CLEANUP, FAIL_TASK_CLEANUP, FAILED, KILL_CONTAINER_CLEANUP, KILL_TASK_CLEANUP, KILLED |
| type | string | The type of task |
| assignedContainerId | string | The container id this attempt is assigned to |
| nodeHttpAddress | string | The http address of the node this task attempt ran on |
| diagnostics | string | The diagnostics message |
| progress | float | The progress of the task attempt as a percent |
| startTime | long | The time in which the task attempt started (in ms since epoch) |
| finishTime | long | The time in which the task attempt finished (in ms since epoch) |
| elapsedTime | long | The elapsed time since the task attempt started (in ms) |
For reduce task attempts, you also have the following fields:
| Item | Data Type | Description |
|---|---|---|
| shuffleFinishTime | long | The time at which shuffle finished (in ms since epoch) |
| mergeFinishTime | long | The time at which merge finished (in ms since epoch) |
| elapsedShuffleTime | long | The time it took for the shuffle phase to complete (time in ms between the reduce task start and shuffle finish) |
| elapsedMergeTime | long | The time it took for the merge phase to complete (time in ms between the shuffle finish and merge finish) |
| elapsedReduceTime | long | The time it took for the reduce phase to complete (time in ms between the merge finish and the end of the reduce task) |
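The three elapsed fields are plain differences of the timestamp fields. A short sketch makes the relationships explicit; the timestamps below are made up for illustration and are not from any real attempt:

```python
# Hypothetical timestamps (ms since epoch) for a reduce task attempt.
start_time     = 1326381500000
shuffle_finish = 1326381502000
merge_finish   = 1326381502500
finish_time    = 1326381504000

# Derived fields, following the definitions in the table above.
elapsed_shuffle_time = shuffle_finish - start_time    # shuffle phase
elapsed_merge_time   = merge_finish - shuffle_finish  # merge phase
elapsed_reduce_time  = finish_time - merge_finish     # reduce phase
elapsed_time         = finish_time - start_time       # whole attempt

print(elapsed_shuffle_time, elapsed_merge_time, elapsed_reduce_time)  # 2000 500 1500
```

The three phase durations always sum to `elapsedTime`, which is a useful sanity check when analyzing reduce attempts.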
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{
  "taskAttempt": {
    "assignedContainerId": "container_1326381300833_0002_01_000002",
    "progress": 100,
    "elapsedTime": 2638,
    "state": "SUCCEEDED",
    "diagnostics": "",
    "rack": "/98.139.92.0",
    "nodeHttpAddress": "host.domain.com:8042",
    "startTime": 1326381450680,
    "id": "attempt_1326381300833_2_2_m_0_0",
    "type": "MAP",
    "finishTime": 1326381453318
  }
}
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 691
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<taskAttempt>
  <startTime>1326381450680</startTime>
  <finishTime>1326381453318</finishTime>
  <elapsedTime>2638</elapsedTime>
  <progress>100.0</progress>
  <id>attempt_1326381300833_2_2_m_0_0</id>
  <rack>/98.139.92.0</rack>
  <state>SUCCEEDED</state>
  <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
  <diagnostics/>
  <type>MAP</type>
  <assignedContainerId>container_1326381300833_0002_01_000002</assignedContainerId>
</taskAttempt>
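The XML form of the single-attempt resource parses cleanly with the standard library; a minimal sketch, with the payload inlined from the response body above rather than fetched from a server:

```python
import xml.etree.ElementTree as ET

# Sample payload copied from the XML response above.
body = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<taskAttempt>
  <startTime>1326381450680</startTime>
  <finishTime>1326381453318</finishTime>
  <elapsedTime>2638</elapsedTime>
  <progress>100.0</progress>
  <id>attempt_1326381300833_2_2_m_0_0</id>
  <rack>/98.139.92.0</rack>
  <state>SUCCEEDED</state>
  <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
  <diagnostics/>
  <type>MAP</type>
  <assignedContainerId>container_1326381300833_0002_01_000002</assignedContainerId>
</taskAttempt>"""

# The document root is the taskAttempt element itself, so each field
# is a direct child and findtext() is enough.
attempt = ET.fromstring(body)
print(attempt.findtext("id"), attempt.findtext("state"),
      float(attempt.findtext("progress")))
```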
With the task attempt counters API, you can obtain a collection of resources that represent all the counters for that task attempt.
| Item | Data Type | Description |
|---|---|---|
| id | string | The task attempt id |
| taskAttemptCounterGroup | array of task attempt counterGroup objects (JSON) / zero or more task attempt counterGroup objects (XML) | A collection of task attempt counter group objects |
| Item | Data Type | Description |
|---|---|---|
| counterGroupName | string | The name of the counter group |
| counter | array of counter objects (JSON) / zero or more counter objects (XML) | A collection of counter objects |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0/counters
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{
  "jobTaskAttemptCounters": {
    "taskAttemptCounterGroup": [
      {
        "counterGroupName": "org.apache.hadoop.mapreduce.FileSystemCounter",
        "counter": [
          {
            "value": 2363,
            "name": "FILE_BYTES_READ"
          },
          {
            "value": 54372,
            "name": "FILE_BYTES_WRITTEN"
          },
          {
            "value": 0,
            "name": "FILE_READ_OPS"
          },
          {
            "value": 0,
            "name": "FILE_LARGE_READ_OPS"
          },
          {
            "value": 0,
            "name": "FILE_WRITE_OPS"
          },
          {
            "value": 0,
            "name": "HDFS_BYTES_READ"
          },
          {
            "value": 0,
            "name": "HDFS_BYTES_WRITTEN"
          },
          {
            "value": 0,
            "name": "HDFS_READ_OPS"
          },
          {
            "value": 0,
            "name": "HDFS_LARGE_READ_OPS"
          },
          {
            "value": 0,
            "name": "HDFS_WRITE_OPS"
          }
        ]
      },
      {
        "counterGroupName": "org.apache.hadoop.mapreduce.TaskCounter",
        "counter": [
          {
            "value": 0,
            "name": "COMBINE_INPUT_RECORDS"
          },
          {
            "value": 0,
            "name": "COMBINE_OUTPUT_RECORDS"
          },
          {
            "value": 460,
            "name": "REDUCE_INPUT_GROUPS"
          },
          {
            "value": 2235,
            "name": "REDUCE_SHUFFLE_BYTES"
          },
          {
            "value": 460,
            "name": "REDUCE_INPUT_RECORDS"
          },
          {
            "value": 0,
            "name": "REDUCE_OUTPUT_RECORDS"
          },
          {
            "value": 0,
            "name": "SPILLED_RECORDS"
          },
          {
            "value": 1,
            "name": "SHUFFLED_MAPS"
          },
          {
            "value": 0,
            "name": "FAILED_SHUFFLE"
          },
          {
            "value": 1,
            "name": "MERGED_MAP_OUTPUTS"
          },
          {
            "value": 26,
            "name": "GC_TIME_MILLIS"
          },
          {
            "value": 860,
            "name": "CPU_MILLISECONDS"
          },
          {
            "value": 107839488,
            "name": "PHYSICAL_MEMORY_BYTES"
          },
          {
            "value": 1123147776,
            "name": "VIRTUAL_MEMORY_BYTES"
          },
          {
            "value": 57475072,
            "name": "COMMITTED_HEAP_BYTES"
          }
        ]
      },
      {
        "counterGroupName": "Shuffle Errors",
        "counter": [
          {
            "value": 0,
            "name": "BAD_ID"
          },
          {
            "value": 0,
            "name": "CONNECTION"
          },
          {
            "value": 0,
            "name": "IO_ERROR"
          },
          {
            "value": 0,
            "name": "WRONG_LENGTH"
          },
          {
            "value": 0,
            "name": "WRONG_MAP"
          },
          {
            "value": 0,
            "name": "WRONG_REDUCE"
          }
        ]
      },
      {
        "counterGroupName": "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
        "counter": [
          {
            "value": 0,
            "name": "BYTES_WRITTEN"
          }
        ]
      }
    ],
    "id": "attempt_1326381300833_2_2_m_0_0"
  }
}
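For lookups by counter name, the nested group/counter arrays in the response above are often more convenient as a two-level dict keyed by group name and counter name. A sketch over a trimmed copy of the payload (only two groups are reproduced here to keep the example short):

```python
import json

# Trimmed copy of the counters response above (two groups shown).
body = '''
{
  "jobTaskAttemptCounters": {
    "id": "attempt_1326381300833_2_2_m_0_0",
    "taskAttemptCounterGroup": [
      {
        "counterGroupName": "org.apache.hadoop.mapreduce.FileSystemCounter",
        "counter": [
          {"name": "FILE_BYTES_READ", "value": 2363},
          {"name": "FILE_BYTES_WRITTEN", "value": 54372}
        ]
      },
      {
        "counterGroupName": "Shuffle Errors",
        "counter": [
          {"name": "BAD_ID", "value": 0}
        ]
      }
    ]
  }
}
'''

groups = json.loads(body)["jobTaskAttemptCounters"]["taskAttemptCounterGroup"]
# Flatten: {group name: {counter name: value}}
counters = {g["counterGroupName"]: {c["name"]: c["value"] for c in g["counter"]}
            for g in groups}
print(counters["org.apache.hadoop.mapreduce.FileSystemCounter"]["FILE_BYTES_READ"])  # 2363
```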
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0/counters
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 2735
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<jobTaskAttemptCounters>
  <id>attempt_1326381300833_2_2_m_0_0</id>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName>
    <counter>
      <name>FILE_BYTES_READ</name>
      <value>2363</value>
    </counter>
    <counter>
      <name>FILE_BYTES_WRITTEN</name>
      <value>54372</value>
    </counter>
    <counter>
      <name>FILE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>FILE_LARGE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>FILE_WRITE_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_BYTES_READ</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_BYTES_WRITTEN</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_LARGE_READ_OPS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>HDFS_WRITE_OPS</name>
      <value>0</value>
    </counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName>
    <counter>
      <name>COMBINE_INPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>COMBINE_OUTPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>REDUCE_INPUT_GROUPS</name>
      <value>460</value>
    </counter>
    <counter>
      <name>REDUCE_SHUFFLE_BYTES</name>
      <value>2235</value>
    </counter>
    <counter>
      <name>REDUCE_INPUT_RECORDS</name>
      <value>460</value>
    </counter>
    <counter>
      <name>REDUCE_OUTPUT_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>SPILLED_RECORDS</name>
      <value>0</value>
    </counter>
    <counter>
      <name>SHUFFLED_MAPS</name>
      <value>1</value>
    </counter>
    <counter>
      <name>FAILED_SHUFFLE</name>
      <value>0</value>
    </counter>
    <counter>
      <name>MERGED_MAP_OUTPUTS</name>
      <value>1</value>
    </counter>
    <counter>
      <name>GC_TIME_MILLIS</name>
      <value>26</value>
    </counter>
    <counter>
      <name>CPU_MILLISECONDS</name>
      <value>860</value>
    </counter>
    <counter>
      <name>PHYSICAL_MEMORY_BYTES</name>
      <value>107839488</value>
    </counter>
    <counter>
      <name>VIRTUAL_MEMORY_BYTES</name>
      <value>1123147776</value>
    </counter>
    <counter>
      <name>COMMITTED_HEAP_BYTES</name>
      <value>57475072</value>
    </counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>Shuffle Errors</counterGroupName>
    <counter>
      <name>BAD_ID</name>
      <value>0</value>
    </counter>
    <counter>
      <name>CONNECTION</name>
      <value>0</value>
    </counter>
    <counter>
      <name>IO_ERROR</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_LENGTH</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_MAP</name>
      <value>0</value>
    </counter>
    <counter>
      <name>WRONG_REDUCE</name>
      <value>0</value>
    </counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName>
    <counter>
      <name>BYTES_WRITTEN</name>
      <value>0</value>
    </counter>
  </taskAttemptCounterGroup>
</jobTaskAttemptCounters>