User:Jin Ke

Development Logs

Community Bonding Period


Work Period

Monday, June 10th, 2024

  • Updated the CMake file to add a new project, rt_trainneural.
  • Added the file rt_trainer.cpp, prepared to hold the sampling methods.

Tuesday, June 11th, 2024

  • Wrote a function that uses rt_raytrace to collect ray results together with the output r, g, b values.

Wednesday, June 12th, 2024

  • Added the a_hit() and a_miss() functions.


Monday, June 24th, 2024

  • Finished plotting the points with an mged script, which I generate automatically:
in point1.s sph 1 1 1 0.1
in point2.s sph 1 2 1 0.1
in point3.s sph 1 1 2 0.1
r all_points.g u point1.s u point2.s u point3.s
B all_points.g

Tuesday, June 25th, 2024

Added sampling methods:

RayParam SampleRandom(size_t num);
RayParam SampleSphere(size_t num);
RayParam UniformSphere(size_t num);
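
The uniform-sphere sampling that UniformSphere() suggests can be sketched as follows. This is an illustrative Python version of the math only, not the actual C++ implementation in rt_trainer.cpp, and the function and variable names are assumptions:

import numpy as np

def uniform_sphere(num):
    # Sample directions uniformly over the unit sphere:
    # z uniform in [-1, 1], azimuth phi uniform in [0, 2*pi).
    z = np.random.uniform(-1.0, 1.0, num)
    phi = np.random.uniform(0.0, 2.0 * np.pi, num)
    r = np.sqrt(1.0 - z * z)
    # Return a (num, 3) array of unit direction vectors.
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)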

Wednesday, June 26th, 2024

Finished storing the results as a JSON file; each record looks like this:

{
    "dir": [
        0.46341427145648684,
        -0.6747185194294957,
        -0.5744581207967407
    ],
    "point": [
        -30.10931324293444,
        72.95779116737057,
        81.61656328468132
    ],
    "rgb": [
        173,
        89,
        174
    ]
}
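
A minimal sketch of reading these records back for training. It assumes the file holds a list of such records; the exact container format is not shown in the log, and the function name is hypothetical:

import json
import numpy as np

def load_rays(path):
    # Assumes the JSON file contains a list of {"dir", "point", "rgb"} records.
    with open(path) as f:
        records = json.load(f)
    points = np.array([r["point"] for r in records], dtype=np.float32)
    dirs = np.array([r["dir"] for r in records], dtype=np.float32)
    # Scale 0-255 colors to [0, 1] for training.
    rgb = np.array([r["rgb"] for r in records], dtype=np.float32) / 255.0
    return points, dirs, rgb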

Sunday, June 30th, 2024

Finished my first neural rendering and generated a picture.

Monday, July 1st, 2024

Converted the coordinate data to spherical coordinates. Created the dataset and related methods.
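
A sketch of the Cartesian-to-spherical conversion for the ray directions. This is illustrative Python; the names and the exact angle convention are assumptions, not the project code:

import numpy as np

def dir_to_spherical(dirs):
    # dirs: (N, 3) unit direction vectors.
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(y, x)                     # azimuth in (-pi, pi]
    return np.stack([theta, phi], axis=1)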

Tuesday, July 2nd, 2024

There were some errors in the coordinate conversion; I fixed them all.

Wednesday, July 3rd, 2024

Wrote a deep ResNet for training; it does not work yet.
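
A minimal sketch of the kind of residual MLP such a network might use. PyTorch, the layer widths, and the input size are assumptions here; the log does not say which framework or architecture details were used:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two linear layers with a skip connection, as in a simple MLP ResNet.
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return torch.relu(x + self.fc2(h))

class RadianceNet(nn.Module):
    # Maps a ray feature (e.g. 3-D point plus direction angles) to RGB in [0, 1].
    def __init__(self, in_dim=6, width=256, depth=4):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 3)

    def forward(self, x):
        return torch.sigmoid(self.out(self.blocks(torch.relu(self.inp(x)))))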

Thursday, July 4th, 2024

Tried to improve the ResNet's performance by modifying hyperparameters.

Friday, July 5th, 2024

Got a great result with the ResNet.

Monday, July 8th, 2024

Finished the grid net with a fixed direction; the result was not bad.
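
A sketch of what such a grid net could look like: a learnable voxel grid of RGB values, queried by trilinear interpolation at sample positions. PyTorch, the resolution, and the layout are all assumptions, not the actual training code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GridNet(nn.Module):
    # Learnable 3-D grid of RGB values; colors at arbitrary positions are
    # read out with trilinear interpolation via F.grid_sample.
    def __init__(self, res=128):
        super().__init__()
        self.grid = nn.Parameter(torch.rand(1, 3, res, res, res))

    def forward(self, pos):
        # pos: (N, 3) query positions already normalized to [-1, 1]
        # (grid_sample coordinate convention).
        coords = pos.view(1, -1, 1, 1, 3)
        rgb = F.grid_sample(self.grid, coords, mode='bilinear', align_corners=True)
        return rgb.reshape(3, -1).t()  # (N, 3)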

Tuesday, July 9th, 2024

Finished the grid net with arbitrary directions; the result was quite bad.

Wednesday, July 10th, 2024

I used a 4-dimensional network that takes both direction and position as input; it is very hard to train.

Thursday, July 11th, 2024

Got some results with a 128*128*96*96*3 grid net; it doesn't work well.

Friday, July 12th, 2024

Read some papers about NeRF. I noted this one: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis; they used an A100 to train, and it took more than two days.

Monday, July 15th, 2024

I am thinking about the rendering loss function. I believe the function is not differentiable, which is probably the main reason why the training results are poor.
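
One common way a rendering loss ends up non-differentiable is quantizing the predicted color to 8-bit values before comparing it to the target. A minimal illustration of that failure mode (PyTorch, hypothetical code, not the project's actual loss):

import torch

pred = torch.rand(3, requires_grad=True)            # predicted RGB in [0, 1]
target = torch.tensor([173.0, 89.0, 174.0]) / 255.0  # target RGB from the dataset

# Rounding to integer pixel values kills the gradient: round() has zero
# derivative almost everywhere, so pred receives no useful training signal.
bad_loss = (((pred * 255).round() / 255) - target).pow(2).mean()

# Comparing the continuous prediction directly keeps the loss differentiable.
good_loss = (pred - target).pow(2).mean()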

Tuesday-Thursday, July 16-18, 2024

I spent a lot of time reading papers on NeRF:

[VIINTER: View Interpolation with Implicit Neural Representations of Images](https://arxiv.org/pdf/2211.00722)

[Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction](https://arxiv.org/pdf/2111.11215)

[NeRF++: Analyzing and improving neural radiance fields](https://arxiv.org/pdf/2010.07492)

etc.