Core technology for building sound-based interfaces
Siri opened up our minds to the use of sound-based user interfaces.
Some IIT Kanpur students have developed the code infrastructure
needed to build the core sound-characterisation technology on PC laptops
that sound-based user interfaces would require.
Of course, this core technology can easily be adapted to regional Indian languages.
If people are interested, we can post a zip of that codebase here.
Maybe putting it up on GitHub would be nicer? Also, what language is it in? :D
High time someone built something like Jarvis or Friday.
It's in C.
This is building just the front-end part of Jarvis,
and it will take a long time.
Probably worthwhile for students, unless someone really wants to take a risk.
That must be really extensive, all that C code...
No, it's not.
This is just the scaffolding you'll need
to build the actual sound characterization algorithms.
There's a low-latency audio driver API called ASIO (Steinberg's)
that allows you to get at the sound buffers on your sound hardware.
This code builds a Windows app that uses ASIO
to access and process the sound buffers in real time,
and uses that processing to drive
some dynamic visual output on the screen.
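To give a feel for the shape of that scaffolding, here is a minimal sketch (not the project's actual code) of an ASIO bufferSwitch callback that computes the RMS level of the incoming buffer. It assumes the Steinberg ASIO SDK headers, 32-bit integer samples (ASIOSTInt32LSB), and hypothetical names g_bufferInfo, g_bufferSize, and update_visuals:

/* Minimal sketch, not the project's actual code. Assumes the
   Steinberg ASIO SDK and ASIOSTInt32LSB (32-bit int) samples.
   g_bufferInfo, g_bufferSize and update_visuals are hypothetical. */
#include <math.h>
#include "asiosys.h"   /* Steinberg ASIO SDK headers */
#include "asio.h"

extern ASIOBufferInfo g_bufferInfo[2]; /* [0] input, [1] output; from ASIOCreateBuffers */
extern long g_bufferSize;              /* samples per buffer; from ASIOGetBufferSize */
void update_visuals(double level);     /* hypothetical hook into the visual UI */

/* The driver calls this each time a buffer half is ready. */
void bufferSwitch(long index, ASIOBool directProcess)
{
    const int *in = (const int *)g_bufferInfo[0].buffers[index];
    double sum = 0.0;
    long i;

    for (i = 0; i < g_bufferSize; i++) {
        double s = in[i] / 2147483648.0;   /* normalise Int32LSB to [-1, 1) */
        sum += s * s;
    }
    update_visuals(sqrt(sum / g_bufferSize));

    ASIOOutputReady();                     /* signal the driver, if supported */
    (void)directProcess;
}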
To use it, you would design sound characterization algorithms
for a specific sound context,
then implement them in C
and pop them into the right place in the source code.
Then make whatever visual UI changes you wanted
depending on what you're trying to do.
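As a hypothetical example of the kind of algorithm you would drop in, a zero-crossing-rate routine gives a crude voiced/unvoiced cue; the function name and call site here are assumptions, not the project's actual interface:

/* Hypothetical example of a drop-in characterization routine:
   zero-crossing rate, a crude voiced/unvoiced cue. Expects n >= 2. */
double zero_crossing_rate(const float *buf, long n)
{
    long i, crossings = 0;

    for (i = 1; i < n; i++)
        if ((buf[i - 1] < 0.0f) != (buf[i] < 0.0f))
            crossings++;               /* sign flip between neighbours */

    return (double)crossings / (double)(n - 1);
}

You would call something like this from the buffer callback and let a threshold on its output drive whatever visual change you want.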
Sound output is equally easy,
so getting it to talk back to you, like Siri, would be possible,
but you would have to develop the sound synthesis algos too.
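Filling the output buffer is just the mirror image of reading the input one. Here is a minimal synthesis sketch, assuming the same 32-bit sample format, that writes a sine tone into an ASIO output buffer; the function name and parameters are illustrative, and real talk-back would of course need proper speech synthesis in its place:

/* Minimal synthesis sketch, 32-bit int samples assumed.
   Illustrative only; not the project's actual interface. */
#include <math.h>

#define TWO_PI 6.283185307179586

void synth_tone(int *out, long n, double freq, double sampleRate, double *phase)
{
    long i;
    double step = TWO_PI * freq / sampleRate;

    for (i = 0; i < n; i++) {
        out[i] = (int)(0.25 * 2147483647.0 * sin(*phase)); /* roughly -12 dBFS */
        *phase += step;
        if (*phase >= TWO_PI)
            *phase -= TWO_PI;          /* keep phase bounded */
    }
}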
We will be putting it up on GitHub soon.
You will need Visual Studio to build and run the code.
A free version is available for
individual developers, open source projects, academic research, education,
and small professional teams.
I am the nodal point of contact for this project.
Ah, interesting. I think I know him.