[Perldl] How to find out cause of out of memory

Clifford Sobchuk clifford.sobchuk at ericsson.com
Tue Feb 14 05:26:48 HST 2012

Hi Folks,

I am running into a problem when loading a large amount of data (the amount varies with log size). The data is pushed into a Perl array and then converted into a piddle. I suspect the conversion from Perl array to piddle is the culprit, but I am not sure. How can I find out where the problem lies and correct it? The end user's computer (a laptop) will apparently hit this situation often. Since the data is intermixed with text that I need in order to hash each specific attribute, I can't simply import with rgrep or rcols. I could use rcols on each section, but that would mean building up the piddle slowly with glue (in groups of 20 to 100 values, depending on the datum for that attribute).
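One thing to keep in mind: pdl(\@array) allocates the whole piddle while the Perl array is still alive, so for a moment both copies exist, and a Perl array of scalars is itself several times larger than the packed data. A possible workaround is to skip the intermediate array and stream values straight into a preallocated piddle. Here is a rough sketch (the element count, the /mydata:/ regex, and reading from STDIN are all assumptions; adapt to your log format):

    use strict;
    use warnings;
    use PDL;

    my $n   = 1_000_000;              # known or pre-counted value count (assumed)
    my $pdl = zeroes(double, $n);     # allocate the piddle up front
    my $i   = 0;
    while (my $line = <STDIN>) {      # stream the log instead of buffering it
        next unless $line =~ /mydata:\s*([\d.]+)/;   # hypothetical pattern
        $pdl->set($i++, $1);          # write directly into the piddle
    }
    # no intermediate Perl array was ever built, so no doubled footprint

This does require knowing (or counting in a first pass) how many values you will read, but two cheap passes over the file may beat one pass that doubles memory.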

Example pseudo-code:

    while (my $line = <$fh>) {
        $index1 = $1 if $line =~ /index1:\s(\d+)\w+/;
        $index2 = ...;
        if ($datastart && !$dataend) {
            push @{ $myhash{$index1}{$index2}{datum1} }, $1 if $line =~ /mydata/;
            $dataend = 1 if $line =~ /$eod/;
        }
    }
    foreach my $index1 (sort keys %myhash) {
        ... # for each index, build the piddle
    }
The raw text files are on the order of 0.5 to 14 GB and are being processed on win32 (Vista, which I know imposes a 2 GB limit per application). I hope this provides enough information to scope the issue.


Core RF Engineering
Calgary, AB, Canada
Phone 613-667-1974  ECN 8109 x71974
Mobile 403-819-9233
clifford.sobchuk at ericsson.com<mailto:clifford.sobchuk at ericsson.com>
yahoo: sobchuk

"The author works for Telefonaktiebolaget L M Ericsson ("Ericsson"), who is solely responsible for this email and its contents. All inquiries regarding this email should be addressed to Ericsson. The web site for Ericsson is www.ericsson.com."

This Communication is Confidential. We only send and receive email on the basis of the terms set out at www.ericsson.com/email_disclaimer<http://www.ericsson.com/email_disclaimer>
