Grades and feedback for assignment 02
The marks for assignment 2 are online. Most of you managed to complete the assignment successfully, even if some corners had to be cut.
I hope you enjoyed this assignment, and I also hope that you got a feeling for some of the challenges in crunching big datasets.
Please email or send a Discord DM to Ken (kenohori#5365) if you want personal feedback. That also applies to those who said they wanted to discuss certain issues in their report.
Most common issues/comments
- nice small things some people did: outputting the runtime/progress, smart ways to avoid duplicate vertices/faces
- very impressed with some of your solutions, in particular:
- implementations using parallel processing
- sparse representations with hashing
- your own approaches that avoid the intersection tests entirely (even if the result wasn’t perfect)
- in order to achieve good performance, it is crucial to:
- choose an appropriate data structure
- minimise the number of intersection tests (eg through iteration orders, nested conditions, bounding box approximations, etc; see the sketch after this list)
- know which operations are fast on which data structures
- many of you didn’t include the MTL file with your OBJ file (no penalty)
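To illustrate the points about data structures and minimising intersection tests, here is a minimal sketch in Python. It assumes a sparse voxel grid stored as a hashed set of (i, j, k) indices and a placeholder triangle_intersects_voxel() function standing in for whatever exact test you used (eg a separating-axis test); the function and parameter names are made up for this example and it is not the reference solution.

```python
def voxelise(triangles, origin, cell_size, triangle_intersects_voxel):
    """Return the set of (i, j, k) voxel indices intersected by any triangle.

    triangles: iterable of three-tuples of (x, y, z) vertex coordinates
    origin:    (x, y, z) of the grid's minimum corner
    cell_size: edge length of a cubic voxel, eg 5.0 (metres)
    triangle_intersects_voxel: the exact test you used (eg separating axes)
    """
    filled = set()  # sparse representation: only occupied voxels are stored
    for tri in triangles:
        # Convert the triangle's bounding box to index ranges, so the
        # expensive intersection test only runs on nearby voxels.
        xs, ys, zs = zip(*tri)
        i0 = int((min(xs) - origin[0]) // cell_size)
        i1 = int((max(xs) - origin[0]) // cell_size)
        j0 = int((min(ys) - origin[1]) // cell_size)
        j1 = int((max(ys) - origin[1]) // cell_size)
        k0 = int((min(zs) - origin[2]) // cell_size)
        k1 = int((max(zs) - origin[2]) // cell_size)
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                for k in range(k0, k1 + 1):
                    if (i, j, k) in filled:
                        continue  # already marked: skip the test
                    if triangle_intersects_voxel(tri, origin, cell_size,
                                                 (i, j, k)):
                        filled.add((i, j, k))
    return filled
```

Iterating per triangle over its own bounding box means the number of candidate tests grows with the geometry rather than with the full grid, and the hashed set keeps memory proportional to the occupied voxels only.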
How it was marked
First, I read your report and tried to evaluate your methodology and the quality of your results from there. If something was unclear, I fell back on your code and/or voxel model.
Finally, I ran your code with the Leiden dataset available here, which was also made with 3dfier. For all but three teams, the runtime with this dataset ranged from 29 seconds to 16 minutes (3 minutes on average). For the remaining three teams, it was taking too long, so I stopped it after an hour or two. Extrapolating from what was completed, my guess is that they would have finished in somewhere between 8 hours and about a day.
Followed all rules / runs without modifications
- 1 - all good
- 0.5 - minor issues (eg breaking some rules)
- 0 - significant issues found (eg I had to fix the code myself)
Voxel model
- 3 - complex model with small voxels (<= 5m) and no obvious issues
- 2 - less complex model (10m) / minor issues (eg objects/materials handling)
- 1 - cutting a lot of corners to make it work (eg very large voxels, cropped model, major issues)
Report
In the report, I was mostly looking for:
- a clear high-level description of your whole methodology
- technical descriptions of how you solved the key issues in your approach (eg finding the grid domain, data structures used, a good iteration order, performing intersection tests, generating the OBJ output without duplicates; see the sketch after this list)
- well-justified rationale for your approach
- an honest and insightful evaluation of your own work (pros and cons)
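As an illustration of the last technical point above (generating the OBJ output without duplicates), here is a minimal sketch, assuming the faces are available as lists of (x, y, z) coordinate tuples. The function name write_obj and its parameters are made up for this example, and material (mtllib/usemtl) lines are omitted.

```python
def write_obj(faces, path):
    """Write faces (each a list of (x, y, z) tuples) to an OBJ file at path."""
    vertex_index = {}   # (x, y, z) -> 1-based index in the OBJ file
    vertices = []       # vertices in the order they will be written
    face_indices = []   # each face as a list of vertex indices

    for face in faces:
        indices = []
        for v in face:
            if v not in vertex_index:
                vertices.append(v)
                vertex_index[v] = len(vertices)  # OBJ indices start at 1
            indices.append(vertex_index[v])
        face_indices.append(indices)

    with open(path, 'w') as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for indices in face_indices:
            f.write("f " + " ".join(str(i) for i in indices) + "\n")
```

Using a dict from coordinates to index means each shared corner between voxel faces is written only once, and looking up whether a vertex was already written stays a constant-time operation.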
Not a must-do but very appreciated:
- evaluation of performance / scaling depending on size of dataset
- big O analysis
- explanations of smart optimisations
- high-quality renders of models
- diagrams to explain methodology
Thus:
- 3 - excellent report
- 2 - okay report (not very clear, missing some elements)
- 1 - report with problems (many elements missing)
Implementation
- 3 - everything works as expected using another file
- 2 - minor issues (eg objects/materials), but generally okay
- 1 - basics work, but large parts are broken
Extra
- 1 extra point for outstanding work (eg creative solutions, parallel processing, sparse representations)