The model is being trained incrementally on the user’s system.
At first, only a fraction of the training data is used.
The SDK “listens” for signals about the training process.
Those clues are sent to a data selection engine that suggests a subset of data to use next.
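One common way such a data selection engine can pick the next subset is uncertainty sampling: prefer the samples the current model is least sure about. The sketch below is illustrative, not the actual engine; the function name `select_next_subset` and the margin-based criterion are assumptions.

```python
import numpy as np

def select_next_subset(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """Pick the unlabeled samples the model is least certain about.

    probs: (n_samples, n_classes) predicted class probabilities.
    Returns indices of the batch_size most uncertain samples.
    """
    # Margin uncertainty: a small gap between the top-2 class
    # probabilities means the model is unsure between them.
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margins)[:batch_size]

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.6, 0.4]])
subset = select_next_subset(probs, 2)  # the two most ambiguous samples
```

Other selection criteria (entropy, diversity, expected gradient length) slot into the same interface; only the scoring line changes.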
The user specifies their requirements for data annotation.
The system recommends and routes the selected data to the best available labeling process (human or autolabeling).
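A minimal routing rule for the human-vs-autolabeling decision could look like the following. This is a sketch under assumed inputs: `autolabel_confidence` and the `requires_expert` flag are hypothetical names standing in for the user's annotation requirements and the autolabeler's self-reported confidence.

```python
def route_item(autolabel_confidence: float,
               requires_expert: bool,
               threshold: float = 0.9) -> str:
    """Route one data item to the best available labeling process.

    Sends the item to a human when the user's requirements demand it
    or when the autolabeler's confidence falls below the threshold.
    """
    if requires_expert or autolabel_confidence < threshold:
        return "human"
    return "autolabel"
```

In practice the threshold would itself be tuned against the auditing results described below.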
The annotations are generated immediately, with no waiting period.
The results are returned to the auditing module, which identifies anomalies.
The flagged data is re-routed to the same or a different labeling provider or process for correction.
You can also choose to fix the labels yourself.
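One simple anomaly check an auditing module might run is a disagreement rate against a trusted reference set (for example, gold questions seeded into the batch). The function name `audit_labels` and the 10% threshold are illustrative assumptions, not the system's actual logic.

```python
def audit_labels(labels, reference, disagreement_threshold=0.1):
    """Flag a label batch as anomalous if it disagrees too often
    with a trusted reference (e.g., seeded gold questions).

    Returns (flagged, disagreement_rate).
    """
    mismatches = sum(1 for got, want in zip(labels, reference) if got != want)
    rate = mismatches / len(reference)
    return rate > disagreement_threshold, rate
```

Batches that come back flagged are the ones re-routed for correction; clean batches continue straight to versioning.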
Labeling partners are penalized if they make too many mistakes, which improves future recommendations.
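The penalty mechanism can be pictured as a running quality score per labeling partner, where each audited mistake lowers the score and future routing prefers higher-scoring partners. This is a minimal sketch assuming an exponential moving average of correctness; the class and method names are hypothetical.

```python
class LabelerScore:
    """Track a running quality score per labeling partner.

    Audited mistakes pull the score down; routing can then prefer
    the highest-scoring partner for future batches.
    """

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.scores: dict[str, float] = {}

    def record(self, partner: str, correct: bool) -> None:
        # Exponential moving average of correctness; starts at 1.0.
        prev = self.scores.get(partner, 1.0)
        self.scores[partner] = self.decay * prev + (1 - self.decay) * float(correct)

    def best(self) -> str:
        return max(self.scores, key=self.scores.get)
```

The same scores can feed the dashboard views mentioned below, so partner quality stays visible rather than implicit.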
Each batch of labels can be visualized and analyzed on the auditing module and the labeling dashboard.
The labels are versioned and sent back to the SDK to resume the training process with the optimal data.
Training continues until the model reaches the required accuracy, the budget is fully spent, or no more useful data is found.
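The three stopping conditions above can be sketched as a single loop. `train_step` and `select_data` are stand-in callables for the SDK's actual training and selection hooks, which this document does not specify.

```python
def train_loop(train_step, select_data, target_accuracy, budget):
    """Run selection/labeling/training rounds until a stop condition.

    train_step(batch) -> new validation accuracy after training on batch.
    select_data() -> (batch, labeling_cost); an empty batch means no
    more useful data is available.
    Returns (stop_reason, final_accuracy).
    """
    spent = 0.0
    accuracy = 0.0
    while True:
        batch, cost = select_data()
        if not batch:                  # no more useful data found
            return "data_exhausted", accuracy
        if spent + cost > budget:      # budget fully spent
            return "budget_spent", accuracy
        spent += cost
        accuracy = train_step(batch)
        if accuracy >= target_accuracy:
            return "target_reached", accuracy
```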
Your model has just been trained on tuned training data without your involvement.