Message boards : Number crunching : So how long is your C116 Task?
Werinbert Joined: 9 May 13 Posts: 10 Credit: 100,312 RAC: 0 |
Just got one back that took longer than expected: a C116 at 73,474.32 sec on an i5 2410M @ 2.3 GHz / 3 cores. This is pushing the limit of how long I want to tie up my computer (I would prefer less than 10 hours). The Yafu task pushed aside three PrimeGrid tasks, which should still make it back on time... if only barely. |
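To make that runtime concrete in hours, the conversion is a one-liner (a trivial helper, included only to put the figure next to the 10-hour preference):

```python
def sec_to_hours(seconds: float) -> float:
    """Convert a runtime in seconds to hours."""
    return seconds / 3600.0

# The C116 above: 73,474.32 s of wall time
print(round(sec_to_hours(73474.32), 1))  # -> 20.4, roughly double the 10-hour preference
```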
yoyo_rkn (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist) Joined: 22 Aug 11 Posts: 736 Credit: 17,612,101 RAC: 38 |
Thanks for posting the runtimes. |
Bruce Kennedy Joined: 8 Sep 11 Posts: 4 Credit: 1,048,224 RAC: 0 |
I have a C116 that has been running for 24 hours now using all 8 cores, and it has been at 99.999% for the last 4 hours or so. It appears the CPU time is also stuck, at 03:25:31. Will it ever finish on my i7-4700HQ? If it's not done in the next 12 hours or so I'll cancel it. |
yoyo_rkn |
You will get credit even after the deadline, so don't worry. Can you post the contents of the slot folder and check when it was last changed? yoyo |
Bruce Kennedy |
The C116 WU is now at 100% and waiting to run. Here are the contents of the text files in the slot directory:

10/05/14 22:50:24 v1.34.5 @ BRUCE,
10/05/14 22:50:24 v1.34.5 @ BRUCE, ****************************
10/05/14 22:50:24 v1.34.5 @ BRUCE, Starting factorization of 38248148288262939877592328434712626897417157843496786404118652121030605576532119227866057116701426977912061693502551
10/05/14 22:50:24 v1.34.5 @ BRUCE, using pretesting plan: normal
10/05/14 22:50:24 v1.34.5 @ BRUCE, no tune info: using qs/gnfs crossover of 95 digits
10/05/14 22:50:24 v1.34.5 @ BRUCE, ****************************
10/05/14 22:50:24 v1.34.5 @ BRUCE, rho: x^2 + 3, starting 1000 iterations on C116
10/05/14 22:50:25 v1.34.5 @ BRUCE, rho: x^2 + 2, starting 1000 iterations on C116
10/05/14 22:50:25 v1.34.5 @ BRUCE, rho: x^2 + 1, starting 1000 iterations on C116
10/05/14 22:50:25 v1.34.5 @ BRUCE, pm1: starting B1 = 150K, B2 = gmp-ecm default on C116
10/05/14 22:50:25 v1.34.5 @ BRUCE, current ECM pretesting depth: 0.00
10/05/14 22:50:25 v1.34.5 @ BRUCE, scheduled 30 curves at B1=2000 toward target pretesting depth of 35.69
10/05/14 22:50:26 v1.34.5 @ BRUCE, Finished 30 curves using Lenstra ECM method on C116 input, B1=2K, B2=gmp-ecm default
10/05/14 22:50:26 v1.34.5 @ BRUCE, current ECM pretesting depth: 15.18
10/05/14 22:50:26 v1.34.5 @ BRUCE, scheduled 74 curves at B1=11000 toward target pretesting depth of 35.69
10/05/14 22:50:41 v1.34.5 @ BRUCE, Finished 74 curves using Lenstra ECM method on C116 input, B1=11K, B2=gmp-ecm default
10/05/14 22:50:41 v1.34.5 @ BRUCE, current ECM pretesting depth: 20.24
10/05/14 22:50:41 v1.34.5 @ BRUCE, scheduled 214 curves at B1=50000 toward target pretesting depth of 35.69
10/05/14 22:52:44 v1.34.5 @ BRUCE, Finished 214 curves using Lenstra ECM method on C116 input, B1=50K, B2=gmp-ecm default
10/05/14 22:52:44 v1.34.5 @ BRUCE, pm1: starting B1 = 3750K, B2 = gmp-ecm default on C116
10/05/14 22:52:47 v1.34.5 @ BRUCE, current ECM pretesting depth: 25.33
10/05/14 22:52:47 v1.34.5 @ BRUCE, scheduled 430 curves at B1=250000 toward target pretesting depth of 35.69
10/05/14 23:07:07 v1.34.5 @ BRUCE, Finished 430 curves using Lenstra ECM method on C116 input, B1=250K, B2=gmp-ecm default
10/05/14 23:07:07 v1.34.5 @ BRUCE, pm1: starting B1 = 15M, B2 = gmp-ecm default on C116
10/05/14 23:07:18 v1.34.5 @ BRUCE, current ECM pretesting depth: 30.45
10/05/14 23:07:18 v1.34.5 @ BRUCE, scheduled 904 curves at B1=1000000 toward target pretesting depth of 35.69
10/06/14 01:26:07 v1.34.5 @ BRUCE, Finished 904 curves using Lenstra ECM method on C116 input, B1=1M, B2=gmp-ecm default
10/06/14 01:26:07 v1.34.5 @ BRUCE, current ECM pretesting depth: 35.56
10/06/14 01:26:07 v1.34.5 @ BRUCE, scheduled 66 curves at B1=3000000 toward target pretesting depth of 35.69
10/06/14 01:54:48 v1.34.5 @ BRUCE, Finished 66 curves using Lenstra ECM method on C116 input, B1=3M, B2=gmp-ecm default
10/06/14 01:54:48 v1.34.5 @ BRUCE, final ECM pretested depth: 35.70
10/06/14 01:54:48 v1.34.5 @ BRUCE, scheduler: switching to sieve method
10/06/14 01:54:48 v1.34.5 @ BRUCE, nfs: commencing nfs on c116: 38248148288262939877592328434712626897417157843496786404118652121030605576532119227866057116701426977912061693502551
10/06/14 01:54:48 v1.34.5 @ BRUCE, nfs: commencing poly selection with 1 threads
10/06/14 01:54:48 v1.34.5 @ BRUCE, nfs: setting deadline of 7650 seconds
10/06/14 03:59:15 v1.34.5 @ BRUCE, nfs: completed 44 ranges of size 250 in 7466.9067 seconds
10/06/14 03:59:15 v1.34.5 @ BRUCE, nfs: best poly = # norm 4.575890e-011 alpha -5.879554 e 4.598e-010 rroots 3
10/06/14 03:59:15 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 05:11:38 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 06:25:39 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 07:34:46 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 08:49:54 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 10:00:20 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 11:16:24 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 12:30:38 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 13:43:13 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 15:04:15 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 16:20:10 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 17:31:18 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 18:45:41 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 19:58:42 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 21:10:20 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 22:26:56 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/06/14 23:45:59 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/07/14 00:57:21 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
10/07/14 02:17:26 v1.34.5 @ BRUCE, nfs: commencing lattice sieving with 1 threads
LatSieveTime: 4329
LatSieveTime: 4425
LatSieveTime: 4129
LatSieveTime: 4493
LatSieveTime: 4205
LatSieveTime: 4546
LatSieveTime: 4436
LatSieveTime: 4339
LatSieveTime: 4842
LatSieveTime: 4536
LatSieveTime: 4251
LatSieveTime: 4440
LatSieveTime: 4363
LatSieveTime: 4281
LatSieveTime: 4575
LatSieveTime: 4724
LatSieveTime: 4266
LatSieveTime: 4775
LatSieveTime: 4860
0 12348.187500 |
yoyo_rkn |
It is running with only 1 thread. There was a brief problem on the server, which issued workunits with only 1 thread, so this workunit will take longer to finish. |
Bruce Kennedy |
It may be using 1 thread, but it has all 8 tied up and is preventing other work from being done. I'm going to cancel it now; it's been running for 36 hours. |
Beyond Joined: 4 Oct 14 Posts: 36 Credit: 148,972,619 RAC: 89,750 |
Hi yoyo, I have a WU that's been running for 132 hours and has been at 100% for the last 2 days: http://yafu.myfirewall.org/yafu/workunit.php?wuid=227630 It's on a slow 2-core Ivy Bridge box, but the longest previous WU on this machine was 21.3 hours. Should I let it run, or is it dead? It's still using CPU, but it looks like only 1 core. It seemed to run fairly normally at first, but as the completion percentage got higher it went slower and slower; just before hitting 100% it was progressing at about 0.001% per 2 hours. Edit: Here's the output from a log file, if that helps:

11/13/14 22:13:02 v1.34.5 @ SUBA,
11/13/14 22:13:02 v1.34.5 @ SUBA, ****************************
11/13/14 22:13:02 v1.34.5 @ SUBA, Starting factorization of 47987412564198200109186934880570980526987301425001440780338916116510420217531104751755815639270422743607663313229559
11/13/14 22:13:02 v1.34.5 @ SUBA, using pretesting plan: normal
11/13/14 22:13:02 v1.34.5 @ SUBA, no tune info: using qs/gnfs crossover of 95 digits
11/13/14 22:13:02 v1.34.5 @ SUBA, ****************************
11/13/14 22:13:02 v1.34.5 @ SUBA, rho: x^2 + 3, starting 1000 iterations on C116
11/13/14 22:13:02 v1.34.5 @ SUBA, rho: x^2 + 2, starting 1000 iterations on C116
11/13/14 22:13:02 v1.34.5 @ SUBA, rho: x^2 + 1, starting 1000 iterations on C116
11/13/14 22:13:02 v1.34.5 @ SUBA, pm1: starting B1 = 150K, B2 = gmp-ecm default on C116
11/13/14 22:13:02 v1.34.5 @ SUBA, current ECM pretesting depth: 0.00
11/13/14 22:13:02 v1.34.5 @ SUBA, scheduled 30 curves at B1=2000 toward target pretesting depth of 35.69
11/13/14 22:13:02 v1.34.5 @ SUBA, Finished 30 curves using Lenstra ECM method on C116 input, B1=2K, B2=gmp-ecm default
11/13/14 22:13:02 v1.34.5 @ SUBA, current ECM pretesting depth: 15.18
11/13/14 22:13:02 v1.34.5 @ SUBA, scheduled 74 curves at B1=11000 toward target pretesting depth of 35.69
11/13/14 22:13:10 v1.34.5 @ SUBA, Finished 74 curves using Lenstra ECM method on C116 input, B1=11K, B2=gmp-ecm default
11/13/14 22:13:10 v1.34.5 @ SUBA, current ECM pretesting depth: 20.24
11/13/14 22:13:10 v1.34.5 @ SUBA, scheduled 214 curves at B1=50000 toward target pretesting depth of 35.69
11/13/14 22:14:23 v1.34.5 @ SUBA, Finished 214 curves using Lenstra ECM method on C116 input, B1=50K, B2=gmp-ecm default
11/13/14 22:14:23 v1.34.5 @ SUBA, pm1: starting B1 = 3750K, B2 = gmp-ecm default on C116
11/13/14 22:14:26 v1.34.5 @ SUBA, current ECM pretesting depth: 25.33
11/13/14 22:14:26 v1.34.5 @ SUBA, scheduled 430 curves at B1=250000 toward target pretesting depth of 35.69
11/13/14 22:22:14 v1.34.5 @ SUBA, Finished 430 curves using Lenstra ECM method on C116 input, B1=250K, B2=gmp-ecm default
11/13/14 22:22:14 v1.34.5 @ SUBA, pm1: starting B1 = 15M, B2 = gmp-ecm default on C116
11/13/14 22:22:25 v1.34.5 @ SUBA, current ECM pretesting depth: 30.45
11/13/14 22:22:25 v1.34.5 @ SUBA, scheduled 904 curves at B1=1000000 toward target pretesting depth of 35.69
11/13/14 23:24:54 v1.34.5 @ SUBA, Finished 904 curves using Lenstra ECM method on C116 input, B1=1M, B2=gmp-ecm default
11/13/14 23:24:54 v1.34.5 @ SUBA, current ECM pretesting depth: 35.56
11/13/14 23:24:54 v1.34.5 @ SUBA, scheduled 66 curves at B1=3000000 toward target pretesting depth of 35.69
11/13/14 23:37:18 v1.34.5 @ SUBA, Finished 66 curves using Lenstra ECM method on C116 input, B1=3M, B2=gmp-ecm default
11/13/14 23:37:18 v1.34.5 @ SUBA, final ECM pretested depth: 35.70
11/13/14 23:37:18 v1.34.5 @ SUBA, scheduler: switching to sieve method
11/13/14 23:37:18 v1.34.5 @ SUBA, nfs: commencing nfs on c116: 47987412564198200109186934880570980526987301425001440780338916116510420217531104751755815639270422743607663313229559
11/13/14 23:37:18 v1.34.5 @ SUBA, nfs: commencing poly selection with 2 threads
11/13/14 23:37:18 v1.34.5 @ SUBA, nfs: setting deadline of 4050 seconds
11/14/14 00:44:48 v1.34.5 @ SUBA, nfs: completed 54 ranges of size 250 in 4050.3227 seconds
11/14/14 00:44:48 v1.34.5 @ SUBA, nfs: best poly = # norm 5.194789e-011 alpha -5.928782 e 5.000e-010 rroots 3
11/14/14 00:44:48 v1.34.5 @ SUBA, nfs: commencing lattice sieving with 2 threads
11/14/14 01:31:26 v1.34.5 @ SUBA, nfs: commencing lattice sieving with 2 threads
11/14/14 02:16:38 v1.34.5 @ SUBA, nfs: commencing lattice sieving with 2 threads
11/14/14 02:59:16 v1.34.5 @ SUBA, nfs: commencing lattice sieving with 2 threads |
yoyo_rkn |
The log looks fine, but it stops on 14 November, so from the log it seems the WU has stopped. Don't trust the progress percentage: it does not come from the application, it is an estimate made by BOINC and BoincTasks. Please check which files in the slot directory were recently changed. There should be an nfs.dat file, which can grow to about 1 GB. yoyo |
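The check yoyo describes (see which files in the slot directory changed most recently) is easy to script. A minimal Python sketch; the slot path shown in the comment is hypothetical and must be adjusted to your BOINC installation:

```python
from datetime import datetime
from pathlib import Path

def files_by_mtime(slot_dir):
    """List the files in slot_dir as (name, mtime) pairs, newest first."""
    entries = [(p.name, p.stat().st_mtime)
               for p in Path(slot_dir).iterdir() if p.is_file()]
    return sorted(entries, key=lambda e: e[1], reverse=True)

# Usage (hypothetical path; adjust for your system):
# for name, mtime in files_by_mtime("/var/lib/boinc-client/slots/0"):
#     print(datetime.fromtimestamp(mtime).isoformat(), name)
```

If the newest file is hours old, the task is likely stuck regardless of what the progress bar says.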
Beyond |
The nfs.dat in the slot directory is only 116,000 KB and hasn't been updated since 11/14, so I suppose it's dead. I was assuming it was still running because of the BOINC progress percentage, and because the Yafu task still shows as using 50% of the CPU in Task Manager. I just paused the WU. Do you want the nfs.dat sent to you? |
yoyo_rkn |
No need to send it. Just abort the wu. yoyo |
Beyond |
Aborted. Thanks yoyo! |
Beyond |
Now I have one on a different machine that's been sitting at 100% for hours. It's still using 100% CPU on 2 cores. The only files updated since yesterday are .last_spq0 and .last_spq1, which update every minute; everything else in the slot directory is about 20 hours old. Kill it or keep running it? |
yoyo_rkn |
Which process is consuming CPU? I would say let it run. |
Beyond |
I paused it to let 2 other yafu WUs run that were approaching their deadlines. I restarted it a few hours ago, and just now it aborted with a computation error (196):

11-22-14 08:04 Aborting task yafu_C101_F1416456605_312_0: exceeded disk limit

From stderr:

<core_client_version>7.4.27</core_client_version>
<![CDATA[
<message>
Maximum disk usage exceeded
</message>
<stderr_txt>
al yield: 6034233, q=15083179 (0.00956 sec/rel)

http://yafu.myfirewall.org/yafu/result.php?resultid=237720 |
yoyo_rkn |
Any idea which file exceeded the disk limit? |
Beyond |
No; when I saw it had aborted I looked in the slot dir and it was empty. The disk has 34 GB free. I had looked at the slot dir about an hour earlier, and the only files updating were still the 2 mentioned above. On all my other Yafu WUs there are a number of files that update perhaps every 10 minutes. Also, what's the maximum RAM the yafu app uses? |
yoyo_rkn |
The settings for the WUs are: Max Memory Usage 500,000,000 and Max Disk Usage 4,000,000,000. These limits were never reached for C116; I only saw the nfs.dat in the slot folder grow up to 1 GB. |
Beyond |
CreditNew is bizarre. Three identical machines score as much as an 8x difference for same-length WUs, sometimes even more. I have no idea why. |
yoyo_rkn |
The same length of a wu, e.g. C116, doesn't mean that the runtime is the same. For one wu it may happen that factors are found directly after the start. For another wu, the check against known prime factors and the trial factoring with ECM may not find anything, and a complete NFS run must be done. So the runtime can differ from a few seconds to many hours for the same number length. yoyo |
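yoyo's point can be illustrated with a toy Pollard rho in Python (a sketch of the general idea only, not the actual yafu code): a composite with a small factor falls out after a handful of iterations, while a number of the same digit length built from two large primes can exhaust the same iteration budget, which is when heavier methods like ECM and finally NFS take over.

```python
from math import gcd

def pollard_rho(n, c=1, max_iter=10000):
    """Toy Pollard rho: return a nontrivial factor of n, or None if the
    iteration budget runs out (the cue to escalate to ECM/NFS)."""
    if n % 2 == 0:
        return 2
    x = y = 2
    for _ in range(max_iter):
        x = (x * x + c) % n   # tortoise: one step of x -> x^2 + c mod n
        y = (y * y + c) % n   # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
        if 1 < d < n:
            return d          # nontrivial factor found
    return None

# 8051 = 83 * 97: the small factor appears within a few iterations,
# whereas a semiprime with two large prime factors of the same total
# size will typically return None under the same budget.
```

The expected cost of rho grows with the square root of the smallest prime factor, which is exactly why two C116 inputs can cost anywhere from seconds (lucky small factor) to a full multi-hour NFS run.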