At the GRIS, we want to use audio descriptors to generate trajectories. Based on the "describe" example, I can analyze the audio signal in real time using JUCE and the FluCoMa library (tip of main).
But I'm looking to detect events (onset detection) with sample-level precision, and I'm not sure which class to use, nor how to use it (fluid::algorithm::OnsetDetectionFunctions or fluid::algorithm::OnsetSegmentation).
The results I get tell me whether or not an event has been detected in the analyzed window. But is it possible to determine precisely at which audio sample within that window the event occurs?
Thanks!
Second, onset detection is a spectral process, so you will run into the usual time/frequency resolution trade-off. Sample accuracy is also very signal-dependent. I recommend checking discourse.flucoma.org and talking to Rodrigo Constanzo about this - he is very open and generous. For percussive signals, I helped optimise something incredibly tight, but it doesn't generalise.
So your problem is complex for a machine. A simple tactic would be to have 'presets' for various types of signal, built on the following pipeline:
1. Get the frame in which the onset falls via something like onsetslice.
2. Within that frame, run an amplitude-based peak finder to 'hone in'.
3. Window sizes and envelope followers can be customised by the user, or provided by you as presets.

(You could run HPSS first, or Sines, to separate the noisy and pitched components.)
There is also the super interesting noveltyslice, which is another paradigm for your step 1.
I hope this helps. I would DEFINITELY prototype this in the CCE of your choice (Max/Pd/SC) to noodle about, then code in C++ whatever pipeline you settle on.