How to solve the error "All the input files must be Sequence files. Invalid file: ''"?

Hadoop version 1.2.1, MATLAB R2015a, Linux Ubuntu 14.
I installed Hadoop on a small cluster (one master and one slave).
The 'wordcount' example runs successfully in Hadoop, and I can also read data from HDFS through MATLAB.
But when I try to run the MATLAB example 'Run mapreduce on a Hadoop Cluster', it fails with the following output:
run_mapreduce_on_a_hadoop
ans =
ArrDelay
________
8
8
21
13
4
59
3
11
Parallel mapreduce execution on the Hadoop cluster:
********************************
* MAPREDUCE PROGRESS *
********************************
Map 0% Reduce 0%
Map 100% Reduce 33%
Map 100% Reduce 71%
Map 100% Reduce 100%
Error using mapreduce (line 100)
All the input files must be Sequence files.
Invalid file: ''
Error in run_mapreduce_on_a_hadoop (line 24)
meanDelay = mapreduce(ds,@meanArrivalDelayMapper,@meanArrivalDelayReducer,mr,...
Here is my MATLAB code:
setenv('HADOOP_HOME','/usr/local/hadoop');
cluster = parallel.cluster.Hadoop;
cluster.HadoopProperties('mapred.job.tracker') = 'ubuntu:50031';
cluster.HadoopProperties('fs.default.name') = 'hdfs://ubuntu:8020';
outputFolder = '/home/rjy/logs/hadooplog';
mr = mapreducer(cluster);
ds = datastore('airlinesmall.csv','TreatAsMissing','NA','SelectedVariableNames','ArrDelay','ReadSize',1000);
preview(ds)
meanDelay = mapreduce(ds,@meanArrivalDelayMapper,@meanArrivalDelayReducer,mr,...
'OutputFolder',outputFolder);
What does "All the input files must be Sequence files. Invalid file: ''" mean?
I have never seen it before; I just copied the code from the MATLAB documentation.
I have tried many things to solve this problem, but without success.
Please give me some suggestions. Thanks.

Accepted Answer

Rick Amos on 17 Aug 2015
This error message occurred because MATLAB could not find the output files generated by the Hadoop job. For now, treat it as equivalent to MATLAB reporting that it could not find the output of the Hadoop job. To resolve it, make sure the output folder is a location that can be accessed by both your local machine and the Hadoop cluster.
I see that outputFolder is set to "/home/rjy/logs/hadooplog". For MATLAB, this points to the home folder on your local machine and is likely not accessible by the Hadoop cluster. As an alternative, could you try:
outputFolder = 'hdfs://ubuntu:8020/home/rjy/out';
This location is guaranteed to be accessible by both your local machine and the Hadoop cluster.
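For reference, here is a minimal sketch of the full script with only the output folder changed to the HDFS location suggested above. The hostname 'ubuntu', the ports, and the '/home/rjy/out' path are taken from this thread, so adjust them for your own cluster; the final readall call is an addition that follows the pattern of the documentation example rather than something from this thread.
% Minimal sketch, using the same cluster settings as in the question.
setenv('HADOOP_HOME','/usr/local/hadoop');
cluster = parallel.cluster.Hadoop;
cluster.HadoopProperties('mapred.job.tracker') = 'ubuntu:50031';
cluster.HadoopProperties('fs.default.name') = 'hdfs://ubuntu:8020';
mr = mapreducer(cluster);
% Read only the ArrDelay variable from the sample airline data.
ds = datastore('airlinesmall.csv','TreatAsMissing','NA', ...
    'SelectedVariableNames','ArrDelay','ReadSize',1000);
% Point the output at HDFS so that both MATLAB and the Hadoop cluster
% can reach it; any HDFS path writable by the job should work.
outputFolder = 'hdfs://ubuntu:8020/home/rjy/out';
meanDelay = mapreduce(ds,@meanArrivalDelayMapper,@meanArrivalDelayReducer,mr, ...
    'OutputFolder',outputFolder);
% Read the key/value results back into MATLAB.
readall(meanDelay)
With OutputFolder on HDFS, MATLAB can locate the Sequence files that the job writes back, which, per the explanation above, is what the "All the input files must be Sequence files" message is really complaining about.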
1 comment
Jingyu Ru on 20 Aug 2015
Thank you very much for helping me solve this problem! I really appreciate it!


More Answers (0)
