Image pipeline of cameranetwork

This document describes the operations performed on images and how the inputs required to run 3D reconstructions are prepared.

Preprocessing on board:

  1. Image capturing: multi-shot averaging (for SNR enhancement), repeated for several exposure times (for HDR).
  2. Camera preprocessing: dark-image subtraction, normalization (according to the fisheye model), HDR calculation, and vignetting correction.

    Note: This preprocessing is called from seekImageArray() and handle_array(). This means that the saved images are pre-preprocessing, and one of these functions must be called in order to apply it.
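For orientation, here is a minimal sketch of the on-board chain above, assuming one stack of raw shots per exposure time. The function and variable names are illustrative, not the actual CameraNetwork API, and the fisheye-model normalization step is omitted:

```python
import numpy as np

def preprocess_on_board(raw_stacks, exposures, dark_image, vignetting_map):
    """Sketch of the on-board preprocessing (names are illustrative).

    raw_stacks: list of arrays, one stack of repeated shots per exposure time.
    exposures: exposure time of each stack, in seconds.
    """
    merged = np.zeros_like(dark_image, dtype=np.float64)
    for stack, exposure in zip(raw_stacks, exposures):
        # Multi-shot averaging at a fixed exposure enhances SNR.
        frame = np.mean(np.asarray(stack, dtype=np.float64), axis=0)
        # Dark-image subtraction removes the sensor offset.
        frame = np.clip(frame - dark_image, 0.0, None)
        # A simple exposure-weighted HDR merge (one of several possible schemes).
        merged += frame / exposure
    hdr = merged / len(exposures)
    # Vignetting correction: divide by the per-pixel gain map.
    return hdr / vignetting_map
```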

Preprocessing using the GUI:

1. Masking & Space carving:

  1. 2D grabcut:

    • Choose under Arrays tab: view settings -> space carving.

    • It creates a cloud mask based on a dilation operation (applied as a maximum filter).

    • This operation is done in ArrayModel._update_cloud_weights().

  2. 2D sunshader:

    • Choose under Arrays tab: view settings -> sun shader.

    • It creates a sunshader mask based on an erosion operation (since the mask is an inversion of the sunshader).

    • This is done in calcSunshaderMask().

  3. 2D ROI:

    • Choose under Arrays tab: view settings -> Widgets -> show ROI.

    • This is a rectangular mask that determines the relevant area of the image, i.e., the part that observes the volume of interest. Currently, it is set manually.

    • Choosing the option Show Grid presents the inspected volume (shown on the map as a blue cube) and its voxels as a grid projection (a red scatter plot on each of the images), which helps to set the ROI.

    • The ROI can be loaded from earlier sessions or saved (as a .pkl file).

  4. 2D and 3D space carving:

Finally, exportData() saves space_carve.pkl and calls the export to SHDOM.

exportToShdom() includes the final masking by the ROI and sunshader masks, and saves the sun mask separately.
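A rough sketch of how these 2D masks might be combined, assuming float/boolean image arrays; the real logic lives in ArrayModel._update_cloud_weights(), calcSunshaderMask(), and exportData(), and the names cloud_weights, shader, and roi_rect are hypothetical:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def cloud_mask(cloud_weights, size=9, threshold=0.5):
    # Dilation, applied as a maximum filter (as described for the grabcut mask).
    return maximum_filter(cloud_weights, size=size) > threshold

def sunshader_mask(shader, size=9):
    # Erosion (a minimum filter): the mask is an inversion of the sunshader,
    # so shrinking it keeps a safety margin around the shader.
    return minimum_filter(shader.astype(np.uint8), size=size) > 0

def final_mask(cloud_weights, shader, roi_rect):
    # Combine cloud, sunshader, and rectangular ROI masks, as in exportToShdom().
    r0, r1, c0, c1 = roi_rect
    roi = np.zeros(shader.shape, dtype=bool)
    roi[r0:r1, c0:c1] = True
    return cloud_mask(cloud_weights) & sunshader_mask(shader) & roi
```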

Questions regarding space carving:
  1. What is the difference between the masks that are saved to space_carve.pkl?

Notes and questions regarding ROI:

  1. In the class image_analysis, the following objects exist: the "ROI" object is based on a generic region-of-interest widget; the projected grid, "grid_scatter", is a ScatterPlotItem; and "mask_ROI" is a pg.PolyLineROI. What is the relation between ROI and mask_ROI? Which of these objects is used in the final mask calculation of the image?

  2. When drawing the camera, there is an option to add the drawing of "roi_mesh", the 3D projection of the cameras' ROIs. Currently, it is not visually clear, and it seems that these objects are not perfectly aligned with the 2D ROIs. This requires further investigation, e.g., how and when is the 3D mesh calculated?
    Also, the mesh visualization of the ROI in 3D may need to be updated.

  3. TODO: Find a way to calculate the ROI automatically based on the grid projection.

2. Extrinsic calibration

This process is based on the sun position and the sunshader. It should be applied on a sunny day with clear sky, and requires two steps:

  1. handle_sunshader_scan()

    • Calculates the sun position on the image plane (measured_positions), the sunshader angle, and the color value, and then saves all of these to sun_positions.csv (under the sun_positions folder).

    • This process is done every ~6 min.

  2. handle_extrinsic()

    • Loads measured_positions from sun_positions.csv and calculates the measured_directions on a unit sphere, according to the fisheye model (undistortion).

    • The fisheye model is pre-determined during intrinsic calibration process.

    • Using the measurement times in sun_positions.csv and ephem, the function computes the sun directions calculated_directions, and then estimates the camera orientation by fitting measured_directions to calculated_directions (see the sketch below).

    • This process also yields the rotation matrix R (the camera-to-world transform).
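A condensed sketch of the fit, assuming measured_directions are already unit vectors from the fisheye undistortion and that directions are expressed in (North, East, Up) coordinates. The helper names are hypothetical, and the actual handle_extrinsic() may solve the fit differently:

```python
import numpy as np
import ephem

def sun_direction_neu(lat_deg, lon_deg, date):
    """Sun direction as a (North, East, Up) unit vector, via ephem."""
    obs = ephem.Observer()
    obs.lat, obs.lon, obs.date = str(lat_deg), str(lon_deg), date
    sun = ephem.Sun(obs)
    alt, az = float(sun.alt), float(sun.az)      # radians; az measured from North
    return np.array([np.cos(alt) * np.cos(az),   # North
                     np.cos(alt) * np.sin(az),   # East
                     np.sin(alt)])               # Up

def fit_rotation(measured, calculated):
    """Kabsch algorithm: the rotation R minimizing ||R @ m_i - c_i||.

    measured, calculated: (N, 3) arrays of matched unit vectors.
    """
    h = measured.T @ calculated                  # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
```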

To apply the extrinsic calibration from the GUI:

  • Servers -> choose camera -> Extrinsic tab -> Extrinsic calibrate
  • This also saves extrinsic_data.npy in the camera's folder.

To save all cameras extrinsic calibration:

  • Arrays -> Save Extrinsic

  • The data is saved under the specific day in the captured_images folder.

Graphical Result:

[Figure: extrinsic calibration result]

  • For a mathematical description, see pp. 49-50 in Amit's thesis, section: "Self-Calibration in a Camera Network".

3. Radiometric calibration:

To perform radiometric calibration with a sunphotometer, the camera should be close to the sunphotometer, and the measurements should be taken on a clear-sky day.

To get the sunphotometer measurements:

  1. Download files from NASA's AERONET site.

    • All the current files can be found under .../data/aeronet.

    • Some manipulation, such as deleting the first rows, might be needed for new data.

    • The meaning of numbers and measurements can be found here.

    • Specifically: the sunphotometer irradiance units are given by a formula [formula image].

  2. The function handle_radiometric():

    • Reads the sunphotometer measurements of the three channels at the requested day and hour.

    • Then it estimates the locations of the pixels on the image plane corresponding to the Almucantar measurement angles.

    • Then the radiometric fit between the sunphotometer measurements and the camera samples is estimated.

  3. The radiometric results are saved to radiometric.pkl under the camera's folder.

For example:

[Figure: radiometric calibration result]
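A sketch of what the per-channel fit might look like, assuming matched pairs of camera samples and sunphotometer radiances along the Almucantar angles. A one-parameter (scale-only) least-squares fit is shown; the fit actually used by handle_radiometric() may differ:

```python
import numpy as np

def radiometric_fit(camera_samples, sunphotometer_values):
    """Per-channel gains g such that g * camera ~ sunphotometer.

    camera_samples, sunphotometer_values: sequences of three 1-D arrays,
    one per color channel, sampled at matching Almucantar angles.
    """
    gains = []
    for x, y in zip(camera_samples, sunphotometer_values):
        x = np.asarray(x, dtype=np.float64)
        y = np.asarray(y, dtype=np.float64)
        gains.append(float(x @ y / (x @ x)))  # closed-form 1-parameter LSQ
    return gains
```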

Questions regarding radiometric calibration:
  1. What are the final conversion units?
  2. What inputs/changes are required for a new experiment?

4. 3D grid and space carving:

The geographic coordinate systems used here are:

  1. ECEF (earth-centered, earth-fixed): the common 3D frame used for conveniently moving the point of view (the observing camera) around the grid according to the cameras' locations (latitude φ, longitude λ, X_ecef, Y_ecef, Z_ecef).

[Figure: the ECEF coordinate system]

  2. Local tangent plane coordinates (LTP): the NEU (North, East, Up) convention uses the grid (X, Y, Z) as follows:

[Figure: ECEF-to-ENU conversion]

See their definitions in the project here. An image of the relation between both coordinate systems:

[Figure: relation between the ECEF and ENU coordinate systems]

In cameranetwork, the NED (North, East, Down) convention, with grid (X, Y, -Z), is used for the visualization and reconstruction grids.

See its definition in the project here.
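For reference, a self-contained sketch of the geodetic-to-ECEF and ECEF-to-NED relations, using standard WGS84 formulas (the project itself may rely on a geodesy library for these conversions):

```python
import numpy as np

A = 6378137.0            # WGS84 semi-major axis [m]
E2 = 6.69437999014e-3    # WGS84 first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """lat, lon in radians, h in meters -> ECEF (X, Y, Z) in meters."""
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + h) * np.sin(lat)])

def ecef_to_ned_matrix(lat, lon):
    """Rotation taking ECEF offsets to local (North, East, Down)."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-sl * co, -sl * so,  cl],   # North
                     [     -so,       co, 0.0],   # East
                     [-cl * co, -cl * so, -sl]])  # Down

# Offset of a point relative to a reference origin, expressed in NED:
# ned = ecef_to_ned_matrix(lat0, lon0) @ (p_ecef - origin_ecef)
```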

Several conversion processes take place:

  1. ProjectGrid() - projects the 3D grid of the volume of interest onto the image plane, using ecef2ned in projectECEF(). The 3D NED grid is of size 12 x 12 x 10 [km^3], with 81 x 81 x 121 voxels; each voxel is 150 x 150 x 100 [m^3]. The 3D grid is projected to a 2D grid on the image plane (shown as red dots on the images). This is done when choosing: View settings -> Widgets -> show grid. A sketch of such a grid appears below.
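A sketch of a grid with these voxel dimensions and of its projection to pixels. The project() function here is a pinhole placeholder standing in for the fisheye camera model used by projectECEF(); all names are illustrative:

```python
import numpy as np

# Voxel centers spaced 150 x 150 x 100 m, 81 x 81 x 121 of them.
north = np.arange(81) * 150.0
east = np.arange(81) * 150.0
down = -np.arange(121) * 100.0            # NED: altitude is negative "down"
N, E, D = np.meshgrid(north, east, down, indexing="ij")
grid = np.stack([N.ravel(), E.ravel(), D.ravel()], axis=1)  # (81*81*121, 3)

def project(points_ned, R, K):
    """Pinhole placeholder: rotate NED points into the camera frame and project.
    The real pipeline applies a fisheye model instead."""
    cam = points_ned @ R.T                # world (NED) -> camera frame
    uv = (cam[:, :2] / cam[:, 2:3]) @ K[:2, :2].T + K[:2, 2]
    return uv                             # (x, y) pixel coordinates
```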

[Figure: ROI, grid, LOS, and view settings]

This method is also used when computing the space-carve score for each view. This is done by choosing, in the map view: Space carving -> Update -> Show space carving.

[Figure: space carving view]

Another use of this method is when applying Update LOS. This function converts the LOS of a single image to the epipolar lines on all other images.

[Figure: camera array with LOS]

To change the line of sight, simply press inside any one of the images. The others should update automatically.

[Figure: updated LOS across images]

The change should also be visible in the map if show LOS has been enabled.

The 3D grid and ROI can also be seen in the map.

[Figure: map showing LOS, ROI, and grid]

  2. The main process for the 3D grid, do_space_carving(), calls the visual hull calculation using a process pool. calc_visual_hull() runs space carving for each view separately and collects the results.

// TODO: add snapshot and explanation here.

space_carve_view() projects the 3D grid onto the corresponding image plane. This is repeated for the number of perturbations chosen by the user. The final grid_score is the mean value over the perturbations, for the voxels that are seen from this view.

// TODO: add snapshot and explanation here.

The grid score is based on the cloud_weights. These are computed in _update_cloud_weights() on the 2D image plane (see the section "Computation of cloud scores maps", p. 23, of Amit Aides' thesis).

Following that, calc_visual_hull() collects the scores from all servers/subprocesses.
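Putting the last few steps together, a heavily simplified sketch of the per-view scoring, reusing the hypothetical project() from the grid sketch above; the perturbation handling and visibility tests in the real space_carve_view() are more involved:

```python
import numpy as np

def space_carve_view(grid, R, K, cloud_weights, n_perturbations=5, sigma=1.0):
    """Mean cloud weight per voxel, averaged over perturbed projections."""
    h, w = cloud_weights.shape
    scores = np.zeros((n_perturbations, grid.shape[0]))
    seen = np.zeros(grid.shape[0], dtype=bool)
    for i in range(n_perturbations):
        # Perturb the projection slightly to soften discretization effects.
        uv = project(grid, R, K) + np.random.normal(0.0, sigma, (grid.shape[0], 2))
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < h))
        cols = np.clip(uv[:, 0].astype(int), 0, w - 1)
        rows = np.clip(uv[:, 1].astype(int), 0, h - 1)
        seen |= inside
        # Sample the 2D cloud weights at each projected voxel.
        scores[i] = np.where(inside, cloud_weights[rows, cols], 0.0)
    grid_score = scores.mean(axis=0)      # averaged over the perturbations
    return grid_score, seen               # calc_visual_hull() combines these per view
```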

TODO: Other issues to cover regarding the image pipeline:

  1. Calculating the pixels' phase function.
  2. Intrinsic calibration.