62 views (last 30 days)
ha ha on 16 Jun 2019
Commented: per isakson on 16 Jun 2019
Let's say I have a very large .txt file with 200 million rows and 11 columns (a 200M-by-11 matrix). All values are numeric (e.g., 10, 100, 200, ...). The file is ~20GB.
When I load this data in Matlab, an error occurs: "Out of Memory"
clear; clc;
filename = 'test42.txt';
load( 'test42.txt' );
P = test42(:,1:3);   % get the coordinates (x,y,z): all rows, columns 1, 2 and 3
My PC system: win10 64-bit, 16GB RAM, Core i7, 1TB HDD, 1TB SSD
Actually, I only want to load the first 3 columns, i.e., a 200M-by-3 matrix. With the reduced number of columns, I hope Matlab will be able to load the data.
Do you know any way to read the whole dataset, or to read only the first 3 columns? Thanks.
The format of my file is like this:

per isakson on 16 Jun 2019
Edited: per isakson on 16 Jun 2019
You didn't say how much physical memory is in your system.
Matlab provides ways to handle large text files (see Large Files and Big Data in the documentation), but forget that for a moment; it's not a free lunch.
" (e.g., 10, 100 ,200...)" Does that mean positive whole numbers only? If so, do you know the maximum value of the three first columns, respectively?
"When I load this data" What exactly did you do?
The three first columns will take 4.8GB to store as double.
>> 200*1e6*3*8/1e9
ans =
4.8
But do you need to use double?
Simplest first, convert the three first columns to double and skip the remaining seven columns. Try
%%
fid = fopen( 'c:\whatever\the_huge_text_file.txt', 'r' );
cac = textscan( fid, '%f%f%f%*f%*f%*f%*f%*f%*f%*f', 'HeaderLines',0 );
fclose( fid );
Or use an alternative formatspec, which is a bit easier to read. It says: read the first three columns and skip the rest up to the newline character.
cac = textscan( num2str([1:10]), '%f%f%f%*[^\n]', 'HeaderLines',0 );
Next step requires input from you on the numbers in the three first columns.
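If, for example, the values turn out to be non-negative integers that fit in 32 bits, or if single precision is sufficient, a smaller class would roughly halve the memory. A sketch, assuming single is acceptable (the path is a placeholder):

%%
fid = fopen( 'c:\whatever\test42.txt', 'r' );
cac = textscan( fid, '%f32%f32%f32%*[^\n]', 'CollectOutput',true );
fclose( fid );
P = cac{1};   % 200e6-by-3 single, ~2.4GB instead of ~4.8GB

The conversion specifier %f32 makes textscan return single directly instead of double.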
In response to the edited question
To keep the precision of the numbers in the text file you need to use double. I'm positive that this will do:
%%
fid = fopen( 'c:\whatever\test42.txt', 'r' );
cac = textscan( fid, '%f%f%f%*f%*f%*f%*f%*f%*f%*f', 'CollectOutput',true );
fclose( fid );
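With 'CollectOutput',true the three columns come back as one numeric matrix in the first cell, so no concatenation step is needed afterwards:

P = cac{1};   % 200e6-by-3 double, ~4.8GB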
per isakson on 16 Jun 2019
Thank you for showing these results.
The three elapsed times puzzle me; the differences are so large.
However, one thing I know is that it's difficult to reproduce results from reading files. A specific result depends on the state of the system cache. Large parts of the file may already be in the cache and furthermore the cache may be more or less "fragmented".
You use the statement
P=cat(1, inputfile{:});
It's possible to increase readability (imo) and save a second or two. Try
%% Sample data
tic, cac = { reshape( [1:194500412*3], [],3 ) }; toc
%%
tic, P1 = cat(1,cac{:}); toc
%%
tic, P2 = cac{1}; toc
%%
tic, P3 = cell2mat( cac ); toc
>> cssm
Elapsed time is 1.812582 seconds.
Elapsed time is 2.085810 seconds.
Elapsed time is 0.000041 seconds.
Elapsed time is 0.314024 seconds.
>> cssm
Elapsed time is 1.700285 seconds.
Elapsed time is 2.239885 seconds.
Elapsed time is 0.000040 seconds.
Elapsed time is 0.370981 seconds.
and P1 and P2 are equal
>> all( P1==P2, 1 )
ans =
1×3 logical array
1 1 1

Walter Roberson on 16 Jun 2019
textscan() is more likely to succeed than some of the other alternatives.
Most reliable would be to pre-allocate all of the storage, and then to process the file a chunk at a time (for efficiency). For example, if you told textscan() to read 50 lines of the file, that would be just under 4 KB, which fits easily into MATLAB's "small blocks" storage strategy, where it can extend a sufficiently small array in place. Copy the 50 rows into the master matrix, then proceed to the next chunk.
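A minimal sketch of that strategy (the file name, the chunk size and the total number of rows are assumptions to be adjusted):

%%
nRows = 200e6;                            % total rows, if known in advance
chunk = 50;                               % rows per textscan call
P     = zeros( nRows, 3 );                % pre-allocate the master matrix
fid   = fopen( 'c:\whatever\test42.txt', 'r' );
row   = 0;
while ~feof( fid )
    cac = textscan( fid, '%f%f%f%*[^\n]', chunk, 'CollectOutput',true );
    n   = size( cac{1}, 1 );
    if n == 0, break, end
    P( row+1 : row+n, : ) = cac{1};       % copy the chunk into place
    row = row + n;
end
fclose( fid );
P = P( 1:row, : );                        % trim, in case fewer rows were read

A larger chunk (e.g., 1e5 rows) trades a little temporary memory for far fewer textscan calls and is likely faster in practice.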