Proposal:
Hi, I have a multi-threaded program that contains 80,000 fields whose data changes about every 10 ms.
Is there an efficient method to write that amount of data to a local InfluxDB?
Current behavior:
I have implemented a sample batching write method that creates multiple threads; each thread manages 5,000 items and writes them to the DB every 10 ms.
using InfluxDB.Client;
using InfluxDB.Client.Api.Domain;
using InfluxDB.Client.Core;
using InfluxDB.Client.Writes;
using System.Diagnostics;

internal class Program
{
    private const string Host = "http://localhost:8086";
    private static string Username = "admin";
    private const string Password = "password";
    private const string Bucket = "PERFORMANCE";
    private const string Organization = "ORG";
    private static InfluxDBClient _client;

    private static async Task Main(string[] args)
    {
        _client = CreateClient();
        List<InfluxItem> influxItems = Enumerable.Range(1, 80_000).Select(x => new InfluxItem(x)).ToList();
        List<InfluxBatch> influxBatches = new List<InfluxBatch>();
        var batches = influxItems.Select((x, i) => new { Index = i, Value = x })
            .GroupBy(x => x.Index / 5000)
            .Select(x => x.Select(v => v.Value).ToList())
            .ToList();
        for (int i = 0; i < batches.Count; i++)
            influxBatches.Add(new($"Batch{i:000}", batches[i], _client));
        Parallel.ForEach(influxBatches, x => Task.Run(x.WriteLoop));
        while (true)
        {
            influxBatches.ForEach(x => Console.WriteLine($"Batch {x.BatchName} elapsed: {x.ExecutionMillisecond:0.00}"));
            await Task.Delay(10);
        }
    }

    private static InfluxDBClient CreateClient()
    {
        var builder = InfluxDBClientOptions.Builder.CreateNew();
        builder.Url(Host);
        builder.Authenticate(Username, Password.ToCharArray());
        //builder.AuthenticateToken(Token);
        builder.Bucket(Bucket);
        builder.Org(Organization);
        builder.LogLevel(LogLevel.None);
        builder.TimeOut(TimeSpan.FromSeconds(600));
        return new(builder.Build());
    }
}
public class InfluxItem
{
    public string? BatchName { get; set; }
    private readonly string _field;
    private float _value;
    private DateTime _timestamp;

    public PointData InfluxPoint => PointData.Measurement("Test")
        .Field(_field, _value)
        .Tag("BatchName", BatchName)
        .Timestamp(_timestamp, WritePrecision.Ns);

    public InfluxItem(int fieldCount)
    {
        _field = $"Influx{fieldCount:000000}";
        _value = 0;
    }

    public void UpdateValue()
    {
        _value += 0.1f;
        _timestamp = DateTime.Now;
    }
}
public class InfluxBatch
{
    private readonly InfluxDBClient _client;
    public string BatchName { get; set; }
    public List<InfluxItem> InfluxItems { get; set; }
    public double ExecutionMillisecond { get; set; }

    public InfluxBatch(string batchName, List<InfluxItem> influxItems, InfluxDBClient client)
    {
        InfluxItems = influxItems;
        _client = client;
        BatchName = batchName;
        InfluxItems.ForEach(x => x.BatchName = batchName);
    }

    public async Task WriteLoop()
    {
        var stopwatch = new Stopwatch();
        while (true)
        {
            stopwatch.Restart();
            InfluxItems.ForEach(x => x.UpdateValue());
            var points = InfluxItems.Select(x => x.InfluxPoint).ToList();
            await _client.GetWriteApiAsync().WritePointsAsync(points);
            stopwatch.Stop();
            // Elapsed.Milliseconds only returns the millisecond component of the
            // elapsed time; TotalMilliseconds gives the full duration.
            ExecutionMillisecond = stopwatch.Elapsed.TotalMilliseconds;
            await Task.Delay(10);
        }
    }
}
The execution time for each thread to update and write its data to the DB is < 1 ms, so the interval between consecutive data points should be < 11 ms. But when I check the update frequency with a query, the gap between points is > 200 ms:
|> elapsed(unit: 1ms)
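One way to separate client-side stalls from server-side delay is a variant of WriteLoop that logs the wall-clock gap between iteration starts; this is only a sketch (the name WriteLoopWithGapLogging and the logging format are illustrative), reusing the InfluxBatch members above. Because the point timestamps come from DateTime.Now in UpdateValue(), that gap is roughly what elapsed() reports between consecutive points of the same field:

public async Task WriteLoopWithGapLogging()
{
    var previousStart = DateTime.UtcNow;
    while (true)
    {
        var start = DateTime.UtcNow;
        // The gap between iteration starts approximates the gap that the
        // elapsed() query sees between consecutive points of one field.
        Console.WriteLine($"{BatchName} iteration gap: {(start - previousStart).TotalMilliseconds:0.00} ms");
        previousStart = start;

        InfluxItems.ForEach(x => x.UpdateValue());
        var points = InfluxItems.Select(x => x.InfluxPoint).ToList();
        await _client.GetWriteApiAsync().WritePointsAsync(points);

        await Task.Delay(10);
    }
}

If the logged gap stays near 11 ms while the query still shows > 200 ms, the delay is happening after the client hands the points off.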
Desired behavior:
Is there an efficient method to write a large amount of data to InfluxDB without these gaps?
Alternatives considered:
Server specs:
CPU: Intel Xeon Silver 4210 (20 CPUs)
RAM: 128 GB
Database storage: local disk (HDD, xTB)
Use case:
This program gathers factory sensor data.
Thank you!
ntdgo changed the title from "Solution to write large of points" to "Solution to write large amount of points" on Sep 18, 2023.
Nothing immediately stands out, beyond the fact that you are writing an enormous amount of data and fields every 10ms. That could be pushing things too far.
The execution time for each thread to update and write data to the DB is < 1 ms. That means the interval between each data point should be < 11 ms. But when I check the update frequency from a query, the gap time is > 200 ms.
Given the large size of the data, there might be delays in processing on the InfluxDB side. You might consider splitting the data up across more writers. Or, if you want to narrow things down further, instead of updating the data on every iteration, read known data in from a file and then, once it is in memory, write that known data and do the same math.
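For the first suggestion, a minimal sketch of what splitting the load across writers could look like: each batch gets its own InfluxDBClient instead of sharing one, reusing the CreateClient() helper and the InfluxBatch class from the snippet above (the SplitWriters name and the batch size are illustrative only):

using InfluxDB.Client;

public static class SplitWriters
{
    // Build one InfluxDBClient per batch so the writer loops do not share a
    // single client and its underlying HTTP connection/write pipeline.
    public static List<InfluxBatch> CreateBatchesWithOwnClients(
        List<InfluxItem> influxItems,
        Func<InfluxDBClient> clientFactory,
        int batchSize = 5000)
    {
        var groups = influxItems
            .Select((item, index) => new { Index = index, Value = item })
            .GroupBy(x => x.Index / batchSize)
            .Select(g => g.Select(v => v.Value).ToList())
            .ToList();

        var influxBatches = new List<InfluxBatch>();
        for (var i = 0; i < groups.Count; i++)
        {
            // Dedicated client per batch instead of the shared _client.
            influxBatches.Add(new InfluxBatch($"Batch{i:000}", groups[i], clientFactory()));
        }
        return influxBatches;
    }
}

In Main this would replace the batch construction, for example: var influxBatches = SplitWriters.CreateBatchesWithOwnClients(influxItems, CreateClient); If the > 200 ms gaps persist even with dedicated clients, that points more toward ingest on the InfluxDB side, as noted above, than toward the client loop.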