Improve the superellipsoid surface area implementation
BRL-CAD
Status: Closed
Time to complete: 120 hrs
Mentors: Sean
This task is a follow-on to http://www.google-melange.com/gci/task/view/google/gci2013/5486014072094720
The objective of this task is to improve the surface area implementation by addressing the issues identified in the comments and by attempting to improve performance.
Submit a patch that improves the existing implementation.
Uploaded Work
File name/URL | File size | Date submitted
---|---|---
superell_surface_area_improvements.tar.gz | 50.1 KB | January 03 2014 13:54 UTC
I would like to work on this task.
This task has been assigned to Andromeda Galaxy. You have 120 hours to complete this task, good luck!
The work on this task is ready to be reviewed.
Congratulations, this task has been completed successfully.
Sorry, I forgot to post these numbers in the explanation file. Ten runs of the function, starting and ending at a precision of 4096, took the following amounts of time for the slow (original) and fast (new) implementations:
FAST: 2m2.971s
SLOW: 5m39.673s
I had one more idea for another performance improvement after I submitted this task; if you can open another task for it, I can do that as well, or I could do it after Code-In is complete.
Andromeda, this looks good, but you did introduce a new constant (500000) without documenting it. Also, I note that your documentation of the other constants (both the earlier 64 and the current 1024, and the multiply-by-4-or-2 factors) is inane... :) Your explanation for both is the same, which means it was useless for 64 and is probably still useless for 1024. All you basically said is "this is what seemed to work, I guess," even though the two values are more than an order of magnitude apart!
Also, the fact that you wrote "A significant speedup of superell_surf_area_general could make larger values more practical" implies these values are tied to how fast your hardware is, which they shouldn't be.
The constant values should be picked based on some metric of quality, NOT on overall performance or time. That is to say, basing them on quality would mean refining until the computed area error falls below a given threshold. Basing them on time would be like adding a timer and refining the estimate until too much time has elapsed, but you're not doing that (nor would it be a good idea).