Create new datasets from formatting pipelines #1566
Conversation
Desktop only. Sets up some scaffolding for doing the same with transcoding pipelines in the future.
BryonLewis left a comment:
Some unused props in JobConfigFilterTranscodeDialog.vue.
Using the constant instead of ['filter', 'transcode'] in one location.
Double job.on('exit', () => ...) calls.
There is a suggestion about refactoring and simplifying the job.on('exit') handling; that isn't a hard requirement for this PR, just a possible way to simplify some of the logic at the end of the runPipeline function.
There may be another thing that Matt wants: support for importing new annotations as well when running a filter. This would require that, once you find out it is a filter or transcode job, the system copies over the existing annotations, or, if the pipeline creates new annotations, copies those over instead. A rough sketch of that follows.
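A minimal sketch of that annotation carry-over idea, purely as an illustration (the interface and helper names below are assumptions, not code from this PR):

// Hypothetical sketch: after a filter/transcode job finishes, either ingest
// annotations the pipeline produced or carry the source dataset's annotations
// forward. Both AnnotationIO members are assumed helpers, not real DIVE APIs.
interface AnnotationIO {
  // Returns a path to pipeline-generated annotations in outputDir, or null.
  findPipelineAnnotations(outputDir: string): Promise<string | null>;
  // Copies annotations from a source (path or dataset id) into a dataset.
  copyAnnotations(source: string, targetDatasetId: string): Promise<void>;
}

async function carryOverAnnotations(
  io: AnnotationIO,
  sourceDatasetId: string,
  newDatasetId: string,
  outputDir: string,
): Promise<void> {
  // Prefer annotations the pipeline itself generated, if any exist.
  const produced = await io.findPipelineAnnotations(outputDir);
  await io.copyAnnotations(produced ?? sourceDatasetId, newDatasetId);
}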
  />
  <JobConfigFilterTranscodeDialog
    :value="menuState === 'configuring'"
    :dataset-name="'foo'"
probably want to pass a variable in here?
  datasetName: {
    type: String,
    required: true,
  },
  pipelineName: {
Is this even used?
    });
  });

  if (['filter', 'transcode'].includes(runPipelineArgs.pipeline.type)) {
Could you use pipelineCreatesDatasetMarkers, which you already imported? It looks like it is the same array.
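That is, assuming the constant holds the same strings, the check would just become:

  // Reuse the already-imported constant instead of repeating the literal array.
  if (pipelineCreatesDatasetMarkers.includes(runPipelineArgs.pipeline.type)) {
    // ...
  }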
      sendToRenderer('filter-complete', conversionJobArgs.meta);
    }
  };
  job.on('exit', async (code) => {
Does this technically make it so there are two job.on('exit') handlers? I.e., at line 216 there is the base job-exit update function?
Update: I did test it, and it is calling both of the functions.
    if (!transcodedFilename) {
      console.error('Could not determine name of output video file.');
    }
    entries.forEach((entry: string) => {
      if (entry.startsWith(`${pipeline.name}_${datasetId}`)) {
        transcodedFilename = npath.join(outputDir, entry);
      }
    });
    await importNewMedia(transcodedFilename, datasetName, code);
  });
transcodedFilename is a variable in an upper scope, at line 155 I think.
Since you're using scoping to access upper variables in this handler, I think you should be able to use transcodedFilename directly and shouldn't need to check the directory for the entries?
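For illustration, the handler above might then collapse to something like this (a sketch against the surrounding diff, assuming the outer transcodedFilename is assigned before this handler runs):

  // Sketch: use the outer-scope transcodedFilename directly instead of
  // re-scanning the output directory for matching entries.
  job.on('exit', async (code) => {
    if (!transcodedFilename) {
      console.error('Could not determine name of output video file.');
      return;
    }
    await importNewMedia(transcodedFilename, datasetName, code);
  });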
  });

  if (['filter', 'transcode'].includes(runPipelineArgs.pipeline.type)) {
    const importNewMedia = async (sourceName: string, datasetName: string, code: number) => {
This is not a requirement but may be an idea:
This may be related to some other comments, but I'm wondering if this should be made its own independent function that is passed all the arguments it needs, rather than relying as much on scoping for access.
Along with this, you could place it inside the first job.on('exit') and simplify some of the logic at the end into functions like:

  ingest_pipeline_data_files()
  if (pipeline.pipe.toLowerCase().includes('calibrate_cameras')) {
    save_calibration_file()
  }
  if (pipelineCreatesDatasetMarkers.includes(runPipelineArgs.pipeline.type)) {
    process_created_datasets(DesktopJob, outputDatasetName, transcodedFile)
  }

Then inside process_created_datasets() you can check whether the new file needs transcoding.
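To make the shape concrete, here is a rough TypeScript sketch of that structure; every name below is a placeholder taken from the pseudocode above, not code that exists in the branch:

  // Sketch of a single consolidated exit handler. All helpers are placeholders
  // for logic that currently lives inline at the end of runPipeline.
  interface PipelineExitContext {
    code: number;
    pipeline: { pipe: string; type: string };
    outputDatasetName: string;
    transcodedFile: string | null;
  }

  const pipelineCreatesDatasetMarkers = ['filter', 'transcode'];

  async function ingestPipelineDataFiles(ctx: PipelineExitContext): Promise<void> { /* ... */ }
  async function saveCalibrationFile(ctx: PipelineExitContext): Promise<void> { /* ... */ }
  // Would create the new dataset and decide whether its media needs transcoding.
  async function processCreatedDatasets(ctx: PipelineExitContext): Promise<void> { /* ... */ }

  async function onPipelineExit(ctx: PipelineExitContext): Promise<void> {
    await ingestPipelineDataFiles(ctx);
    if (ctx.pipeline.pipe.toLowerCase().includes('calibrate_cameras')) {
      await saveCalibrationFile(ctx);
    }
    if (pipelineCreatesDatasetMarkers.includes(ctx.pipeline.type)) {
      await processCreatedDatasets(ctx);
    }
  }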
One issue is that when track annotations are generated alongside images/videos, the annotations correspond to the new images, not the original sequence. I've asked Claude to fix this on a local branch, and it has, in a commit described as: "This commit uses the pipeline prefixes to determine which pipelines produce image or video output." That is one way to do it, though I'm worried it's not the best. Maybe DIVE could auto-detect when images are produced, or, as a fallback, pipelines could have some specifier in their headers that indicates this? Alternatively, this commit could probably be taken as-is, though I'm worried I'll put a pipeline in Utilities or something that produces image outputs at some point.
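For the auto-detect option, one possible (purely speculative) approach would be to key off what the pipeline actually wrote rather than off pipeline-name prefixes:

  // Speculative sketch: decide whether a pipeline produced image output by
  // inspecting the file extensions present in its output directory.
  import npath from 'path';
  import { readdir } from 'fs/promises';

  const imageExtensions = new Set(['.png', '.jpg', '.jpeg', '.tif', '.tiff', '.bmp']);

  async function pipelineProducedImages(outputDir: string): Promise<boolean> {
    const entries = await readdir(outputDir);
    return entries.some((entry) => imageExtensions.has(npath.extname(entry).toLowerCase()));
  }

This would sidestep maintaining a prefix list, at the cost of trusting the contents of the output directory.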
Fix #1453
Changes
Pipelines of the type filter and transcode now save their output to a new directory under VIAME_DATA. This data is imported after the pipelines run.
If running pipelines in bulk, a default name is given to the datasets that are created. If running one of these pipelines on a single dataset from the main data view, a new modal window prompts the user to name the new dataset themselves.
A new function, sendToRenderer, has been created to send messages from the main process to renderer processes. It is used as part of this set of changes to tell the renderers to refresh the available datasets, so that newly created datasets are shown to the user right after they are available.
Sometimes after import it is clear that a dataset needs to be converted to a web-friendly format. This conversion happens in the same job as the original pipeline, after the new data is ingested.
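In Electron, a main-to-renderer broadcast like this is typically built on webContents.send; a minimal sketch of what sendToRenderer might look like (an assumption about its shape, not this PR's actual implementation):

  // Sketch: broadcast a channel/payload pair to every open renderer window.
  import { BrowserWindow } from 'electron';

  function sendToRenderer(channel: string, ...args: unknown[]): void {
    BrowserWindow.getAllWindows().forEach((win) => {
      win.webContents.send(channel, ...args);
    });
  }

A renderer would then subscribe with ipcRenderer.on(channel, ...) and, for example, refresh its dataset list when a 'filter-complete' message arrives.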
Testing
Test filter and transcode pipelines from both the bulk pipeline menu and dataset view pipeline selector. Ensure that new datasets are created with the expected names, and the resultant data is visible.