This document describes the operations performed on the images and the preparation of the inputs required to run 3D reconstructions.
- Image capturing: multi-shot averaging (for SNR enhancement), repeated for several exposure times (for HDR).
- Camera preprocessing: dark image subtraction, normalization (according to the fisheye model), HDR calculation, and vignetting correction.

Note: the preprocessing is called from seekImageArray() and handle_array(). This means the images are saved before preprocessing, and one of these functions needs to be called in order to apply it.
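The dark-subtraction and vignetting steps can be sketched as follows (a minimal NumPy sketch; the function and argument names are hypothetical, and the real pipeline also performs fisheye normalization and HDR merging):

```python
import numpy as np

def preprocess(raw, dark, vignetting_gain):
    """Dark-image subtraction followed by vignetting correction.

    raw, dark, vignetting_gain: 2D arrays of the same shape.
    All names here are hypothetical; see the project's actual code.
    """
    img = raw.astype(np.float64) - dark   # remove the sensor's dark level
    img = np.clip(img, 0.0, None)         # negative residuals are noise
    return img * vignetting_gain          # flat-field / vignetting gain map
```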
2D grabcut:
- Choose under the Arrays tab: View settings -> Space carving.
- It creates a cloud mask based on a dilation operation (applied by a maximum filter).
- This operation is done in ArrayModel._update_cloud_weights().
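The dilation step can be illustrated with a pure-NumPy 3x3 maximum filter (a sketch; the project presumably uses a library filter such as scipy.ndimage.maximum_filter, and the actual kernel size is not specified here):

```python
import numpy as np

def max_filter3(mask):
    """3x3 maximum filter = morphological dilation for a binary mask.
    A pure-NumPy stand-in for scipy.ndimage.maximum_filter(mask, size=3)."""
    padded = np.pad(mask, 1, mode="edge")
    shifts = [padded[i:i + mask.shape[0], j:j + mask.shape[1]]
              for i in range(3) for j in range(3)]
    return np.max(shifts, axis=0)   # each pixel takes its neighborhood max
```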
2D sunshader:
- Choose under the Arrays tab: View settings -> Sun shader.
- It creates a sunshader mask based on an erosion operation (since the mask is an inversion of the sunshader).
- This is done in calcSunshaderMask().
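Since the stored mask marks the usable (non-shaded) pixels, eroding it is equivalent to dilating the shader region itself. A minimal sketch (the function name and the 3x3 kernel size are assumptions):

```python
import numpy as np

def calc_sunshader_mask(shader_pixels):
    """Sketch: invert the shader footprint, then erode the valid region
    with a 3x3 minimum filter (all names/sizes are assumptions)."""
    valid = ~shader_pixels                    # True = not covered by the shader
    padded = np.pad(valid, 1, mode="edge")
    shifts = [padded[i:i + valid.shape[0], j:j + valid.shape[1]]
              for i in range(3) for j in range(3)]
    return np.min(shifts, axis=0)             # erosion of the valid area
```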
2D ROI:
- Choose under the Arrays tab: View settings -> Widgets -> Show ROI.
- This is a rectangular mask that determines the relevant area of the image, i.e. the area that observes the volume of interest. Currently, it is set manually.
- Choosing the option Show Grid presents the inspected volume (shown on the map as a blue cube) and its voxels as a grid projection (a red scatter plot on each of the images), which helps to set the ROI.
- The ROI can be loaded from earlier sessions or saved (as a .pkl file).
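Saving and loading the ROI between sessions presumably amounts to pickling the widget state; a minimal sketch (the state layout shown is an assumption, not the project's actual .pkl schema):

```python
import pickle

def save_roi(path, roi_state):
    """Persist an ROI state dict (layout assumed) as a .pkl file."""
    with open(path, "wb") as f:
        pickle.dump(roi_state, f)

def load_roi(path):
    """Restore a previously saved ROI state."""
    with open(path, "rb") as f:
        return pickle.load(f)
```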
2D and 3D space carving:
- This operation is done in Map3dModel.do_space_carving().
- Finally, exportData() saves space_carve.pkl and calls the export to SHDOM. exportToShdom() includes the final masking of the ROI and sunshader, and saves the sun mask separately.
- Open question: what is the difference between this mask and the mask that is saved to space_carve.pkl?
- In the class image_analysis, the following objects exist: the "ROI" object is based on a generic region-of-interest widget; the projected grid, "grid_scatter", is a ScatterPlotItem; "mask_ROI" is a pg.PolyLineROI. What is the relation between ROI and mask_ROI? Which of these objects is used in the final mask calculation of the image?
- When drawing the camera there is an option to add the drawing of "roi_mesh", the 3D projection of the cameras' ROIs. Currently it is not visually clear, and it seems that these objects are not perfectly aligned with the 2D ROIs. This requires further investigation, e.g. how and when is the 3D mesh calculated? Also, the mesh visualization of the ROI in 3D may need to be updated.
- TODO: find a way to calculate the ROI automatically based on the grid projection.
Extrinsic calibration:
This process is done according to the sun position and the sunshader. It should be applied on a sunny day with a clear sky, and requires two steps:

- Calculate the sun position on the image plane (measured_positions), the sunshader angle, and the color value, then save all to sun_positions.csv (under the sun_positions folder). This process is done every ~6 minutes.
- Load measured_positions from sun_positions.csv and calculate the measured_directions according to the fisheye model (undistortion) on a unit sphere. The fisheye model is pre-determined during the intrinsic calibration process.
- Using the measurement times in sun_positions.csv and ephem, the function calculates the sun directions (calculated_directions), and then estimates the camera orientation by fitting measured_directions to calculated_directions. This process also yields the rotation matrix R (the camera-to-world transform).
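The fit of measured_directions to calculated_directions is an orthogonal Procrustes problem; below is a standard SVD-based (Kabsch) sketch of the idea (the project's actual fitting routine may differ):

```python
import numpy as np

def fit_rotation(measured, calculated):
    """Least-squares rotation R such that R @ measured[i] ~ calculated[i].

    measured, calculated: (N, 3) arrays of unit direction vectors.
    A Kabsch / orthogonal-Procrustes sketch of the extrinsic fit.
    """
    H = measured.T @ calculated
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid an improper (reflected) fit
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```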
To apply the extrinsic calibration from the GUI: Servers -> choose camera -> Extrinsic tab -> Extrinsic calibrate.
- This also saves extrinsic_data.npy in the camera folder.

To save the extrinsic calibration of all cameras: Arrays -> Save Extrinsic.
- This saves in a specific day of the captured_images folder.

Graphical result:
- For a mathematical description, see pp. 49-50 in Amit's thesis, section "Self-Calibration in a Camera Network".
Radiometric calibration:
To perform radiometric calibration with a sunphotometer, the camera should be close to the sunphotometer, and the measurements should be done on a clear-sky day.

To get the sunphotometer measurements:
- Download files from NASA's AERONET site.
- All the current files can be found under .../data/aeronet.
- Some manipulation, such as deleting the first rows, might be needed for new data.
- The meaning of the numbers and measurements can be found here.
The function handle_radiometric():
- reads the sunphotometer measurements of the 3 channels at the requested day and hour;
- estimates the location of the pixel on the image plane corresponding to the Almucantar measurement angles;
- estimates the radiometric fit between the sunphotometer measurements and the camera samples.

The radiometric results are saved to radiometric.pkl under the camera's folder.
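The per-channel fit can be sketched as a least-squares gain through the origin (an assumption; handle_radiometric() may fit an offset or use a different model):

```python
import numpy as np

def fit_radiometric(camera_samples, sunphot_radiances):
    """Per-channel linear gain mapping camera samples to the
    sunphotometer's radiometric scale (sketch; names are hypothetical).

    Both arguments: lists of 1D arrays, one array per color channel.
    """
    gains = []
    for cam, ref in zip(camera_samples, sunphot_radiances):
        gains.append(np.dot(cam, ref) / np.dot(cam, cam))  # ref ~ gain * cam
    return gains
```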
Open questions, for example:
- What are the final conversion units?
- What inputs/changes are required for a new experiment?
The geographic coordinate systems used here are:
- ECEF (earth-centered, earth-fixed): the common 3D frame, used for conveniently moving the point of view (the observing camera) around the grid according to the cameras' locations (latitude \phi, longitude \lambda, X_ecef, Y_ecef, Z_ecef). See its definition in the project here.
- NED (North East Down): in cameranetwork, the NED grid (X, Y, -Z) convention is used for the visualization and reconstruction grid. See its definition in the project here.

(Image: the relation between both coordinate systems.)
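The ECEF-to-NED relation at a reference site is the standard rotation below (a textbook sketch of what an ecef2ned helper computes; the project may use a library implementation):

```python
import numpy as np

def ecef_to_ned_matrix(lat_deg, lon_deg):
    """Rotation taking ECEF vectors into the local NED (North, East, Down)
    frame at a reference latitude/longitude."""
    lat, lon = np.radians([lat_deg, lon_deg])
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([
        [-sl * co, -sl * so,  cl],   # North row
        [-so,       co,      0.0],   # East row
        [-cl * co, -cl * so, -sl],   # Down row
    ])
```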
There are several conversion processes:
- projectGrid(): projects the 3D grid of the volume of interest onto the image plane, using ecef2ned in projectECEF().
  The 3D NED grid is of size 12 x 12 x 10 [km^3], having 81 x 81 x 121 voxels, each voxel of size 150 x 150 x 100 [m^3].
  The 3D grid is projected to a 2D grid on the image plane (shown as red dots).
  This is done when choosing View settings -> Widgets -> Show grid.
- This method is also used when computing the space carve score per view. That is done when choosing, in the map view: Space carving -> Update -> Show space carving.
- Another usage of this method is when applying Update LOS. This function converts the LOS of a single image into epipolar lines on all other images.
  To change the line of sight, simply click inside any one of the images; the others should update automatically. The change should also be visible in the map if Show LOS has been enabled. The 3D grid and ROI can also be seen in the map.
- The main process, do_space_carving() of the 3D grid, calls the visual hull calculation using a process pool. calc_visual_hull() runs space carving per view separately and collects the results.
  (TODO: add snapshot and explanation here.)
- space_carve_view() projects the 3D grid onto the corresponding image plane. This is repeated according to the number of perturbations chosen by the user; the final grid_score is the mean value over the perturbations, for the voxels that are seen from this view.
  (TODO: add snapshot and explanation here.)
- The grid score is based on the cloud_weights. This is computed in _update_cloud_weights(), on the 2D image plane (see section "Computation of cloud scores maps", p. 23, Amit Aides' thesis).
- Following that, calc_visual_hull() collects the scores from all servers/subprocesses.
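The per-view averaging over perturbations and the cross-view collection can be sketched as follows (shapes and names are assumptions about the actual implementation):

```python
import numpy as np

def view_grid_score(weights_per_perturbation, visibility):
    """Mean cloud weight over perturbations, per voxel, for one view.

    weights_per_perturbation: (n_perturbations, n_voxels) array.
    visibility: boolean (n_voxels,) mask of voxels this view can see.
    """
    score = weights_per_perturbation.mean(axis=0)
    score[~visibility] = np.nan            # unseen voxels carry no vote
    return score

def collect_visual_hull(per_view_scores):
    """Combine per-view scores, ignoring voxels a view does not see."""
    return np.nanmean(per_view_scores, axis=0)
```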
- Calculate the pixels' phase function?
- Intrinsic calibration.