The Pangool MapReduce API is mainly formed by the following classes:
**TupleMapper**: Subclasses of this class can be used as Mappers in Pangool jobs. The class requires two generic types: the key and value types of the job’s input format. This is because TupleMappers always emit Tuples as intermediate output, so only the types relative to the input format need to be declared. The class has three available methods: setup(), map() and cleanup().
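For example, a minimal TupleMapper in the style of Pangool’s word count; the schema fields word and count are illustrative and must match an intermediate Schema registered in the job builder:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

import com.datasalt.pangool.io.Tuple;
import com.datasalt.pangool.tuplemr.TupleMapper;

// The two generics (LongWritable, Text) are the key/value types of the
// input format (here, a plain Hadoop TextInputFormat). No output types are
// declared: a TupleMapper always emits Tuples.
public class TokenizerMapper extends TupleMapper<LongWritable, Text> {

  private Tuple tuple;

  @Override
  public void setup(TupleMRContext context, Collector collector)
      throws IOException, InterruptedException {
    // Reuse a single Tuple built from the first intermediate Schema
    // registered through TupleMRBuilder.addIntermediateSchema().
    tuple = new Tuple(context.getTupleMRConfig().getIntermediateSchema(0));
  }

  @Override
  public void map(LongWritable key, Text value, TupleMRContext context,
      Collector collector) throws IOException, InterruptedException {
    StringTokenizer words = new StringTokenizer(value.toString());
    tuple.set("count", 1);
    while (words.hasMoreTokens()) {
      tuple.set("word", words.nextToken());
      collector.write(tuple); // one Tuple per token
    }
  }
}
```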
**TupleReducer**: Subclasses of this class can be used as Reducers in Pangool jobs. The class requires two generic types: the key and value types of the job’s output format. This is because TupleReducers always receive ITuple groups and values from the intermediate output, so only the types relative to the output format need to be declared. The class has three available methods: setup(), reduce() and cleanup(). A TupleReducer can also be used as a Combiner, as long as its output types are (ITuple, NullWritable).
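And a matching TupleReducer, again following the word-count pattern; because its output types are (ITuple, NullWritable), the same class can also be registered as a Combiner:

```java
import java.io.IOException;

import org.apache.hadoop.io.NullWritable;

import com.datasalt.pangool.io.ITuple;
import com.datasalt.pangool.tuplemr.TupleMRException;
import com.datasalt.pangool.tuplemr.TupleReducer;

// The two generics (ITuple, NullWritable) are the key/value types of the
// output format; the intermediate types are always Tuples and need no
// declaration.
public class CountReducer extends TupleReducer<ITuple, NullWritable> {

  @Override
  public void reduce(ITuple group, Iterable<ITuple> tuples,
      TupleMRContext context, Collector collector)
      throws IOException, InterruptedException, TupleMRException {
    int count = 0;
    ITuple outTuple = null;
    for (ITuple tuple : tuples) {
      count += (Integer) tuple.get("count");
      outTuple = tuple; // reuse the last received Tuple as output
    }
    outTuple.set("count", count);
    collector.write(outTuple, NullWritable.get());
  }
}
```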
**TupleMRContext**: An instance of this class is received by both TupleMapper and TupleReducer. The user can get the standard Hadoop Context object through getHadoopContext() to use counters, progress(), etc.
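For example, from within a map() or reduce() method (the counter group and name are illustrative):

```java
// Reach the native Hadoop context to update a custom counter and report
// progress (the "stats"/"malformed-lines" counter is illustrative).
context.getHadoopContext().getCounter("stats", "malformed-lines").increment(1);
context.getHadoopContext().progress();
```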
**TupleRollupReducer**: The Reducer to be used when using rollup. It has two extra callback methods, onOpenGroup() and onCloseGroup(), invoked as sub-groups are opened and closed. For more information on rollup, check the rollup section in the user guide.
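A sketch of the extra callbacks; the signatures are assumptions based on the rollup section of the user guide:

```java
// Inside a TupleRollupReducer<ITuple, NullWritable> subclass (sketch; the
// callback signatures are assumed from the rollup section):
@Override
public void onOpenGroup(int depth, String field, ITuple firstElement,
    TupleMRContext context, Collector collector)
    throws IOException, InterruptedException, TupleMRException {
  // Called each time a (sub-)group at the given rollup depth opens.
}

@Override
public void onCloseGroup(int depth, String field, ITuple lastElement,
    TupleMRContext context, Collector collector)
    throws IOException, InterruptedException, TupleMRException {
  // Called each time a (sub-)group at the given rollup depth closes.
}
```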
**MapOnlyJobBuilder**: Use this class to conveniently create jobs that only have a Map step (no Reducer).
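A minimal sketch modeled on Pangool’s grep example; the paths and the "ERROR" filter are illustrative, and the builder calls (setMapper, addInput, setOutput, createJob) assume the MapOnlyJobBuilder API as used in that example:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import com.datasalt.pangool.tuplemr.MapOnlyJobBuilder;
import com.datasalt.pangool.tuplemr.mapred.MapOnlyMapper;
import com.datasalt.pangool.tuplemr.mapred.lib.input.HadoopInputFormat;
import com.datasalt.pangool.tuplemr.mapred.lib.output.HadoopOutputFormat;

public class GrepLikeJob {

  public static void main(String[] args) throws Exception {
    MapOnlyJobBuilder b = new MapOnlyJobBuilder(new Configuration());
    // Emit only the lines containing the (illustrative) marker "ERROR".
    b.setMapper(new MapOnlyMapper<LongWritable, Text, Text, NullWritable>() {
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.toString().contains("ERROR")) {
          context.write(value, NullWritable.get());
        }
      }
    });
    b.addInput(new Path("input"), new HadoopInputFormat(TextInputFormat.class));
    b.setOutput(new Path("output"), new HadoopOutputFormat(TextOutputFormat.class),
        Text.class, NullWritable.class);
    b.createJob().waitForCompletion(true);
  }
}
```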
**TupleMRBuilder**: Use this class to create job instances that use the Pangool API. The most important methods are listed below; a driver sketch putting them together follows the table.
| Method | Description |
| --- | --- |
| addIntermediateSchema | Allows the user to define intermediate Schemas. At least one must be defined. When performing joins, usually more than one Schema will be defined (see the joins section for more information). |
| addInput | Allows the user to add an input Path with an associated input format and TupleMapper. An arbitrary number of inputs can be added with this same method. |
| addTupleInput | This method must be used when reading Tuple inputs (files that were generated by Pangool jobs that wrote Tuples as output). |
| setOutput | Allows the user to define the job’s main output Path and format. |
| setTupleOutput | This method must be used when writing Tuples as the main output of the job. It takes an associated Schema so that Pangool knows how to write the Tuples. |
| addNamedOutput | See the named outputs section. |
| addNamedTupleOutput | See the named outputs section. |
| setDefaultNamedOutput | See the named outputs section. |
| setTupleReducer | Sets the TupleReducer instance to be used. |
| setTupleCombiner | Sets the TupleReducer instance to be used as a Combiner. |
| setGroupByFields / setOrderBy | Configure how Pangool will group and sort the intermediate Tuples. For more info, check the “Group & Sort by” section. |
| createJob() | Returns the Job instance, ready to be run. |
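A minimal driver in the style of Pangool’s word count, wiring the TokenizerMapper and CountReducer sketched above; the schema, field names and paths are illustrative assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import com.datasalt.pangool.io.Fields;
import com.datasalt.pangool.io.ITuple;
import com.datasalt.pangool.io.Schema;
import com.datasalt.pangool.tuplemr.TupleMRBuilder;
import com.datasalt.pangool.tuplemr.mapred.lib.input.HadoopInputFormat;
import com.datasalt.pangool.tuplemr.mapred.lib.output.HadoopOutputFormat;

public class WordCountDriver {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One intermediate Schema with the two fields used by the
    // TokenizerMapper / CountReducer sketches above.
    Schema schema = new Schema("counts", Fields.parse("word:string, count:int"));

    TupleMRBuilder mr = new TupleMRBuilder(conf, "Pangool WordCount");
    mr.addIntermediateSchema(schema);
    mr.setGroupByFields("word");
    mr.addInput(new Path("input"), new HadoopInputFormat(TextInputFormat.class),
        new TokenizerMapper());
    mr.setTupleReducer(new CountReducer());
    // CountReducer outputs (ITuple, NullWritable), so it can double as Combiner.
    mr.setTupleCombiner(new CountReducer());
    mr.setOutput(new Path("output"), new HadoopOutputFormat(TextOutputFormat.class),
        ITuple.class, NullWritable.class);

    Job job = mr.createJob();
    job.waitForCompletion(true);
  }
}
```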
**TupleTextInputFormat**: Use this input format for reading text files into Pangool’s Tuples. See the Text I/O section for more info.

**TupleTextOutputFormat**: Use this output format for writing Pangool’s Tuples out as text files. See the Text I/O section for more info.
**IdentityTupleMapper**: Use this Mapper implementation when your Mapper only needs to emit the Tuples exactly as they are read (when using Tuple inputs).

**IdentityTupleReducer**: Use this Reducer implementation when your Reducer only needs to emit the Tuples exactly as they are received, including all the Tuples in the values’ Iterator.
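For example, a job that reads Tuple files written by a previous job and simply re-groups them could wire the identity implementations like this (the path and the builder variable mr are illustrative):

```java
// 'mr' is a TupleMRBuilder configured with the same Schema the previous
// job wrote; the identity classes just pass Tuples through unchanged.
mr.addTupleInput(new Path("previous-job-output"), new IdentityTupleMapper());
mr.setTupleReducer(new IdentityTupleReducer());
```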