org.apache.hadoop.mapred.MiniMRCluster#waitUntilIdle() Source Code Examples

Listed below are example usages of org.apache.hadoop.mapred.MiniMRCluster#waitUntilIdle() taken from open-source projects; each project link leads to the full source file on GitHub.
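
Before the project examples, here is a minimal sketch of the typical call pattern, assuming the old org.apache.hadoop.mapred API used below. The cluster sizes and the standalone WaitUntilIdleSketch/main() wrapper are illustrative, not taken from either project.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.mapred.MiniMRCluster;

public class WaitUntilIdleSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    MiniDFSCluster dfs = null;
    MiniMRCluster mr = null;
    try {
      // Start a 2-node HDFS mini cluster and an MR mini cluster on top of it.
      dfs = new MiniDFSCluster(conf, 2, true, null);
      mr = new MiniMRCluster(2, dfs.getFileSystem().getUri().toString(), 1);

      // ... submit jobs against the mini cluster here ...

      // Block until the mini MR cluster reports no running tasks, so the
      // shutdown below does not race with in-flight work.
      mr.waitUntilIdle();
    } finally {
      // Null-guarded teardown: if startup failed, mr and/or dfs may be unset.
      if (mr != null) {
        mr.shutdown();
      }
      if (dfs != null) {
        dfs.shutdown();
      }
    }
  }
}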

Example 1   Project: RDFS   File: TestUlimit.java
/**
 * Tests the memory-limit setting for streaming processes.
 * Launches a streaming app that allocates 10MB of memory.
 * The program is first launched with a sufficient memory limit and is
 * expected to succeed; it is then launched with an insufficient limit
 * and is expected to fail.
 */
public void testCommandLine() {
  if (StreamUtil.isCygwin()) {
    return;
  }
  try {
    final int numSlaves = 2;
    Configuration conf = new Configuration();
    dfs = new MiniDFSCluster(conf, numSlaves, true, null);
    fs = dfs.getFileSystem();
    
    mr = new MiniMRCluster(numSlaves, fs.getUri().toString(), 1);
    writeInputFile(fs, inputPath);
    map = StreamUtil.makeJavaCommand(UlimitApp.class, new String[]{});  
    runProgram(SET_MEMORY_LIMIT);
    fs.delete(outputPath, true);
    assertFalse("output not cleaned up", fs.exists(outputPath));
    mr.waitUntilIdle();
  } catch(IOException e) {
    fail(e.toString());
  } finally {
    mr.shutdown();
    dfs.shutdown();
  }
}
 
Example 2   Project: hadoop-gpu   File: TestUlimit.java
/**
 * Tests the memory-limit setting for streaming processes.
 * Launches a streaming app that allocates 10MB of memory.
 * The program is first launched with a sufficient memory limit and is
 * expected to succeed; it is then launched with an insufficient limit
 * and is expected to fail.
 */
public void testCommandLine() {
  if (StreamUtil.isCygwin()) {
    return;
  }
  try {
    final int numSlaves = 2;
    Configuration conf = new Configuration();
    dfs = new MiniDFSCluster(conf, numSlaves, true, null);
    fs = dfs.getFileSystem();
    
    mr = new MiniMRCluster(numSlaves, fs.getUri().toString(), 1);
    writeInputFile(fs, inputPath);
    map = StreamUtil.makeJavaCommand(UlimitApp.class, new String[]{});  
    runProgram(SET_MEMORY_LIMIT);
    fs.delete(outputPath, true);
    assertFalse("output not cleaned up", fs.exists(outputPath));
    mr.waitUntilIdle();
  } catch(IOException e) {
    fail(e.toString());
  } finally {
    mr.shutdown();
    dfs.shutdown();
  }
}
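
Note that both snippets call mr.shutdown() and dfs.shutdown() unconditionally in the finally block; if MiniDFSCluster or MiniMRCluster construction throws, the corresponding field is still null and the teardown itself fails with a NullPointerException that masks the original error. The null-guarded teardown in the sketch near the top of this page is one way to avoid that.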