Re: [yoshimi-user] Another update

  • From: Will Godfrey <willgodfrey@xxxxxxxxxxxxxxx>
  • To: yoshimi-user@xxxxxxxxxxxxxxxxxxxxx
  • Date: Thu, 29 May 2014 18:02:41 +0100

On Tue, 27 May 2014 22:00:10 +0200
Kristian Amlie <kristian@xxxxxxxxxx> wrote:


> NRPNs are not as well supported by hardware devices as other CC
> messages. Given the choice I'd focus on either MIDI learn, or trying to
> support as much as we can with regular CC messages.
>
> That is, unless NRPN support is an easy one we can grab from Zyn?

I quite agree. It's a low priority. NRPNs are also rather clunky to use even
from a sequencer, and you can only control one at a time. The feature I'm most
interested in is MIDI learn, as it's the most flexible.
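To illustrate why NRPNs feel clunky (a rough sketch; the names here are my own, not anything from the Yoshimi code): setting one NRPN value takes a sequence of four CC messages, and the parameter selection is channel-global state, which is why only one NRPN can be addressed at a time.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// A single CC message: status byte (0xB0 | channel), controller, value.
using CCMessage = std::array<uint8_t, 3>;

// One NRPN write = four CC messages: CC 99/98 select the parameter
// (MSB/LSB), then CC 6/38 carry the data (MSB/LSB). The selected
// parameter is per-channel state shared by all subsequent data entry
// messages, so you can only drive one NRPN at a time.
std::vector<CCMessage> nrpnSet(uint8_t channel, uint16_t param, uint16_t value)
{
    uint8_t status = uint8_t(0xB0 | (channel & 0x0F));
    return {
        { status, 99, uint8_t((param >> 7) & 0x7F) }, // NRPN MSB
        { status, 98, uint8_t(param & 0x7F) },        // NRPN LSB
        { status,  6, uint8_t((value >> 7) & 0x7F) }, // Data entry MSB
        { status, 38, uint8_t(value & 0x7F) },        // Data entry LSB
    };
}
```

Compare that with a plain CC, which is a single three-byte message with no shared state.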

I'm still struggling with the code at the moment. Things are getting clearer
but rather s-l-o-w-l-y :(

In the new_midi branch I've done some further optimisations.

One thing that is a bit odd is that in midi.event we are passing around a
timestamp as well as the 4 bytes of data we're actually interested in, but
nowhere is this time figure used. From a MIDI point of view yoshi (and zyn)
are 'instant response'.
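Roughly the shape of the thing being described (a hypothetical sketch, not Yoshimi's actual struct or field names):

```cpp
#include <cstdint>

// Sketch of the kind of event discussed: 4 bytes of MIDI data plus a
// timestamp that gets carried along but never consulted, because the
// synth responds "instantly" to whatever it dequeues.
struct MidiEvent {
    uint32_t time;     // passed around, but unused on the receiving side
    uint8_t  data[4];  // status byte + up to 3 data bytes
};

// The consumer only ever looks at the data bytes.
uint8_t eventStatus(const MidiEvent& ev) { return ev.data[0]; }
uint8_t eventValue(const MidiEvent& ev)  { return ev.data[2]; }
```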

After a lot of reading and head-scratching, this makes sense. We are *using*
the data, not creating it or passing it on to any other application. While in
ALSA it's arguable that the timing could be used for latency alignment, JACK is
already sample aligned (which also raises the interesting possibility that we
don't need the ring buffer for processing, only for GUI operations).
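To make the sample-alignment point concrete (my own minimal sketch, nothing taken from the Yoshimi or JACK sources): inside one JACK process period, each MIDI event already carries its frame offset within the buffer, so the synth can render up to that sample, apply the event, and carry on, with no intermediate ring buffer.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical event as delivered inside one process period: a frame
// offset within the buffer plus the raw MIDI bytes.
struct PeriodEvent {
    uint32_t frame;     // sample offset within this period
    uint8_t  data[3];
};

// Sample-aligned handling: render audio up to each event's frame,
// apply the event, continue. No ring buffer is needed because the
// events are already ordered and aligned within the period.
template <typename RenderFn, typename ApplyFn>
void processPeriod(const std::vector<PeriodEvent>& events,
                   uint32_t periodFrames, RenderFn render, ApplyFn apply)
{
    uint32_t pos = 0;
    for (const auto& ev : events) {
        if (ev.frame > pos) {
            render(pos, ev.frame);   // audio before the event
            pos = ev.frame;
        }
        apply(ev);                   // event lands on its exact sample
    }
    if (pos < periodFrames)
        render(pos, periodFrames);   // remainder of the period
}
```

A ring buffer would still earn its keep for GUI-originated changes, which arrive from another thread at arbitrary times.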

Anyway enough waffle. Action next :)

--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.

