EDUX is a user-friendly library for solving problems with a machine learning approach.
EDUX supports a variety of machine learning algorithms including:
- Multilayer Perceptron (Neural Network): Suitable for regression and classification problems, MLPs can approximate non-linear functions.
- K Nearest Neighbors: A simple, instance-based learning algorithm used for classification and regression (see the sketch after this list).
- Decision Tree: Provides explicit, easily visualized decision rules based on input features.
- Support Vector Machine: Effective for binary classification, and can be adapted for multi-class problems.
- RandomForest: An ensemble method providing high accuracy through building multiple decision trees.
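To make the instance-based idea behind K Nearest Neighbors concrete, here is a minimal, library-independent sketch (illustrative only, not EDUX's API): the model simply stores the training samples and classifies a query point by majority vote among its k closest neighbors.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Minimal, library-independent K Nearest Neighbors sketch (illustrative only, not EDUX's API):
// keep the training points and classify a query by majority vote among its k closest neighbors.
public class KnnSketch {

  record Sample(double[] features, String label) {}

  static String classify(Sample[] train, double[] query, int k) {
    Sample[] byDistance = train.clone();
    Arrays.sort(byDistance, Comparator.comparingDouble(s -> distance(s.features(), query)));

    Map<String, Integer> votes = new HashMap<>();
    for (int i = 0; i < k && i < byDistance.length; i++) {
      votes.merge(byDistance[i].label(), 1, Integer::sum);
    }
    return votes.entrySet().stream()
        .max(Map.Entry.comparingByValue())
        .orElseThrow()
        .getKey();
  }

  static double distance(double[] a, double[] b) {
    double sum = 0.0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      sum += d * d;
    }
    return Math.sqrt(sum);
  }

  public static void main(String[] args) {
    Sample[] train = {
      new Sample(new double[] {1.0, 1.0}, "A"),
      new Sample(new double[] {1.2, 0.8}, "A"),
      new Sample(new double[] {5.0, 5.0}, "B"),
      new Sample(new double[] {5.2, 4.9}, "B")
    };
    System.out.println(classify(train, new double[] {1.1, 0.9}, 3)); // prints "A"
  }
}
```

Because all the work happens at prediction time, KNN needs no training step, which is why it is often the simplest baseline to try.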
EDUX supports a variety of image augmentations, which can be used to improve the robustness and performance of your model.
Example augmentations (before and after): an original image and its color-equalized counterpart, and an original image and a monochrome version with added noise.
Build an augmentation pipeline with the AugmentationBuilder and apply it to a single BufferedImage:

```java
AugmentationSequence augmentationSequence =
    new AugmentationBuilder()
        .addAugmentation(new ResizeAugmentation(250, 250))
        .addAugmentation(new ColorEqualizationAugmentation())
        .build();

BufferedImage augmentedImage = augmentationSequence.applyTo(image);
```
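For context, `image` and `augmentedImage` above are plain `java.awt.image.BufferedImage` objects. One straightforward, illustrative (non-EDUX) way to read and write them is the JDK's own `javax.imageio.ImageIO`; the file names below are placeholders, and both calls throw `IOException`:

```java
// Illustrative only (not part of EDUX): read the input with the JDK's ImageIO,
// run it through the sequence built above, and write the result back to disk.
BufferedImage image = ImageIO.read(new File("input.png"));
BufferedImage augmentedImage = augmentationSequence.applyTo(image);
ImageIO.write(augmentedImage, "png", new File("augmented.png"));
```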
To augment a whole directory of training images, build the sequence once and run it over the directory with a number of worker threads:

```java
AugmentationSequence augmentationSequence =
    new AugmentationBuilder()
        .addAugmentation(new ResizeAugmentation(250, 250))
        .addAugmentation(new ColorEqualizationAugmentation())
        .addAugmentation(new BlurAugmentation(25))
        .addAugmentation(new RandomDeleteAugmentation(10, 20, 20))
        .build()
        .run(trainImagesDir, numberOfWorkers, outputDir);
```
Benchmark: we run all algorithms on the same dataset and compare the results.
The main goal of this project is to create a user-friendly library for solving problems using a machine learning approach. The library is designed to be easy to use, enabling the solution of problems with just a few lines of code.
Include the library as a dependency in your Java project's build file.

Gradle:

```groovy
implementation 'io.github.samyssmile:edux:1.0.7'
```

Maven:

```xml
<dependency>
    <groupId>io.github.samyssmile</groupId>
    <artifactId>edux</artifactId>
    <version>1.0.7</version>
</dependency>
```
EDUX supports GPU acceleration on Nvidia GPUs. Requirements:
- Nvidia GPU with CUDA support
- CUDA Toolkit 11.8
This section guides you through using EDUX to process your dataset, configure a multilayer perceptron (a multilayer neural network), and perform training and evaluation.
A multi-layer perceptron (MLP) is a feedforward artificial neural network that generates a set of outputs from a set of input features. An MLP is characterized by several layers of input nodes connected as a directed graph between the input and output layers.
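To make the forward pass concrete, here is a tiny, library-independent sketch (not EDUX code) of how a one-hidden-layer MLP maps an input vector to class probabilities: each layer applies an affine transform followed by an activation, ReLU for the hidden layer and softmax for the output.

```java
// Library-independent illustration of one MLP forward pass:
// hidden = relu(W1 * x + b1), output = softmax(W2 * hidden + b2).
static double[] forward(double[] x, double[][] w1, double[] b1, double[][] w2, double[] b2) {
  double[] hidden = relu(affine(x, w1, b1));
  return softmax(affine(hidden, w2, b2));
}

static double[] affine(double[] x, double[][] w, double[] b) {
  double[] out = new double[w.length]; // one row of w per output neuron
  for (int i = 0; i < w.length; i++) {
    double sum = b[i];
    for (int j = 0; j < x.length; j++) sum += w[i][j] * x[j];
    out[i] = sum;
  }
  return out;
}

static double[] relu(double[] v) {
  double[] out = new double[v.length];
  for (int i = 0; i < v.length; i++) out[i] = Math.max(0.0, v[i]);
  return out;
}

static double[] softmax(double[] v) {
  double max = Double.NEGATIVE_INFINITY;
  for (double d : v) max = Math.max(max, d);
  double sum = 0.0;
  double[] out = new double[v.length];
  for (int i = 0; i < v.length; i++) { out[i] = Math.exp(v[i] - max); sum += out[i]; }
  for (int i = 0; i < v.length; i++) out[i] /= sum;
  return out;
}
```

The NetworkBuilder example below composes the same kind of layers (DenseLayer, ReLuLayer, SoftmaxLayer) without any of this manual bookkeeping.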
In this example we use the famous MNIST dataset. The MNIST database contains 60,000 training images and 10,000 test images.
First, set the training hyperparameters, point the loaders at the MNIST files, and read the input and output sizes from the dataset metadata:

```java
int batchSize = 100;
int threads = 1;
int epochs = 10;
float initialLearningRate = 0.1f;
float finalLearningRate = 0.001f;

String trainImages = "train-images.idx3-ubyte";
String trainLabels = "train-labels.idx1-ubyte";
String testImages = "t10k-images.idx3-ubyte";
String testLabels = "t10k-labels.idx1-ubyte";

Loader trainLoader = new ImageLoader(trainImages, trainLabels, batchSize);
Loader testLoader = new ImageLoader(testImages, testLabels, batchSize);

MetaData trainMetaData = trainLoader.open();
int inputSize = trainMetaData.getInputSize();
int outputSize = trainMetaData.getExpectedSize();
trainLoader.close();
```

For MNIST, the input size is 784 (28x28 pixels) and the output size is 10 (digit classes).
We configure, train, and save the network with the NetworkBuilder class:
```java
new NetworkBuilder()
    .addLayer(new DenseLayer(inputSize, 32))   // 32 neurons as output size
    .addLayer(new ReLuLayer())
    .addLayer(new DenseLayer(32, outputSize))  // 32 neurons as input size
    .addLayer(new SoftmaxLayer())
    .withBatchSize(batchSize)
    .withLearningRates(initialLearningRate, finalLearningRate)
    .withExecutionMode(singleThread)
    .withEpochs(epochs)
    .build()
    .printArchitecture()
    .fit(trainLoader, testLoader)
    .saveModel("model.edux"); // Save the trained model
```
Load 'model.edux' and continue training for 10 epochs.
```java
NeuralNetwork nn =
    new NetworkBuilder().withEpochs(10).loadModel("model.edux").fit(trainLoader, testLoader);
```
Example training output:

```
........................Epoch: 1, Loss: 1,14, Accuracy: 91,04
...
........................Epoch: 10, Loss: 0,13, Accuracy: 96,16
```
You can find more fully working examples for all algorithms in the examples folder.
Contributions are warmly welcomed! If you find a bug, please create an issue with a detailed description of the problem. If you wish to suggest an improvement or fix a bug, please open a pull request. Also check out the Rules and Guidelines page for more information.