Implementations of common ML algorithms written from scratch, with their runtime improved using HPX.
Prerequisites: an existing HPX installation.

Build and run:

```sh
cmake -S . -Bbuild -GNinja
cmake --build build
./build/linear_reg [path_to_dataset]
```
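Since the project is driven by CMake, a minimal `CMakeLists.txt` sketch of how the `linear_reg` target could be wired up against an installed HPX is shown below. The project name and source file name are assumptions for illustration, not the repository's actual layout:

```cmake
# Minimal sketch, not the repository's actual CMakeLists.txt.
cmake_minimum_required(VERSION 3.18)
project(ml_from_scratch CXX)

# Locate an installed HPX (may require -DHPX_DIR=<install>/lib/cmake/HPX)
find_package(HPX REQUIRED)

# Source file name is assumed for illustration
add_executable(linear_reg linear_reg.cpp)

# HPX::wrap_main lets a plain main() run inside the HPX runtime
target_link_libraries(linear_reg PRIVATE HPX::hpx HPX::wrap_main)
```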
- Linear Regression GD, seq, for-loop: uses a classic `for` loop to perform Gradient Descent (GD)
- Linear Regression GD, STL, seq: uses `std::transform_reduce`
- Linear Regression GD, STL, par: uses `std::transform_reduce` with the `std::execution::par` execution policy
- Linear Regression GD, HPX, par: uses `hpx::transform_reduce` with the `hpx::execution::par` execution policy (a sketch of this variant follows this list)
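All four variants compute the same gradients and differ only in how the reduction over the dataset is executed. Below is a minimal sketch of one GD step for the HPX variant, assuming a model `y = W*x + B` with a mean-squared-error loss and the dataset held in two `std::vector<double>`s; all names are illustrative, not the repository's code:

```cpp
// Minimal sketch of one gradient-descent step, parallelized with HPX.
#include <hpx/algorithm.hpp>   // hpx::transform_reduce
#include <hpx/execution.hpp>   // hpx::execution::par
#include <functional>
#include <vector>

void gd_step(std::vector<double> const& xs, std::vector<double> const& ys,
             double& W, double& B, double lr)
{
    double const n = static_cast<double>(xs.size());

    // dL/dW = (2/n) * sum_i (W*x_i + B - y_i) * x_i, reduced in parallel
    double const dW = hpx::transform_reduce(
        hpx::execution::par, xs.begin(), xs.end(), ys.begin(), 0.0,
        std::plus<>{},
        [=](double x, double y) { return (W * x + B - y) * x; });

    // dL/dB = (2/n) * sum_i (W*x_i + B - y_i)
    double const dB = hpx::transform_reduce(
        hpx::execution::par, xs.begin(), xs.end(), ys.begin(), 0.0,
        std::plus<>{},
        [=](double x, double y) { return W * x + B - y; });

    W -= lr * (2.0 / n) * dW;
    B -= lr * (2.0 / n) * dB;
}
```

Swapping `hpx::transform_reduce` and `hpx::execution::par` for `std::transform_reduce` with `std::execution::seq` or `std::execution::par` yields the two STL variants listed above.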
Results with a dataset containing 10,000 points:

```
$ ./build/linear_reg datasets/linear_regressor_dataset_10000.csv
Final Parameters: W = 2.577991, B = -0.136139
Final Parameters: W = 2.514959, B = 3.057930
Final Parameters: W = 2.530195, B = 2.043939
Final Parameters: W = 2.523242, B = 2.506657
```
Results:

| Name | Executor | Average execution time |
| --- | --- | --- |
| Sequential GD, for-loop | seq | 0.72105570925 |
| Linear Regression GD, STL, seq | seq | 0.961342297166667 |
| Linear Regression GD, STL, par | `std::execution::par` | 0.945831261333333 |
| Linear Regression GD, HPX, par | `hpx::execution::par` | 0.51301071275 |
Note: The execution time of each implementation is averaged over 10 runs.
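A minimal sketch of how such an averaged timing could be collected, assuming each benchmark is exposed as a callable; the helper name `average_seconds` and its structure are illustrative assumptions, not the repository's harness:

```cpp
// Run a benchmark callable `repetitions` times and return the mean
// wall-clock time in seconds.
#include <chrono>

template <typename F>
double average_seconds(F&& run, int repetitions = 10)
{
    double total = 0.0;
    for (int i = 0; i != repetitions; ++i)
    {
        auto const start = std::chrono::steady_clock::now();
        run();
        std::chrono::duration<double> const elapsed =
            std::chrono::steady_clock::now() - start;
        total += elapsed.count();
    }
    return total / repetitions;
}

// Usage (hypothetical): double avg = average_seconds([&] { train(xs, ys); });
```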