So, it looks like VAIPR’s first iteration is complete!
At this point, the project (with the help of CMUSphinx) listens to and parses speech continuously, and responds to queries both on a screen and with a spoken reply.
The project actually ended up being relatively simple thanks to CMUSphinx, so I didn’t have to worry about any signal processing. There were some complications, though, in designing the backend: deciding how I wanted the multi-threaded program to behave and how others should interface with it.
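For a sense of what I mean by the multi-threaded design, here is a minimal sketch of one way such a back-end can be structured: a listener thread pushes recognized phrases onto a queue, and a worker thread consumes and responds to them. The names here are purely illustrative, not VAIPR’s actual API, and the listener is a stand-in for the real CMUSphinx loop.

```python
# Sketch: producer/consumer split between listening and responding.
# The listener() here fakes the CMUSphinx recognition loop.
import queue
import threading

phrases = queue.Queue()
responses = []

def listener():
    # Stand-in for the continuous-recognition loop that yields phrases.
    for phrase in ["what time is it", "stop"]:
        phrases.put(phrase)
    phrases.put(None)  # sentinel: no more input

def worker():
    while True:
        phrase = phrases.get()
        if phrase is None:
            break
        responses.append("heard: " + phrase)

t1 = threading.Thread(target=listener)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()
print(responses)  # → ['heard: what time is it', 'heard: stop']
```

The nice part of this shape is that the queue is the whole interface: anything that can put a phrase on it (a microphone thread, a test script, eventually a web request) looks the same to the worker.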
Though there are some large changes in the works (ultimately, I’d like it to be a distributed web-server setup), its core, the part that does the listening and processes what queries mean, works, and that’s something I’m proud of. There are a lot of changes to be made, but for now the project will be on hiatus for a while.
After doing extensive work with Flask, and seeing how simple it is to deploy a fully portable web server around any application, I’m going to move in that direction and hook the back-end up to a web server. I think this will make it easier to display output and to connect with many potential agents (listening devices/display devices).
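The plan would look something like this: wrap the back-end in a tiny Flask app so any agent can POST a query and get a reply back. This is only a sketch; the `/query` route and the `respond()` helper are hypothetical stand-ins for whatever the real back-end exposes.

```python
# Sketch: a minimal Flask front-end for the VAIPR back-end.
from flask import Flask, jsonify, request

app = Flask(__name__)

def respond(query):
    # Placeholder for the back-end that parses a query and builds a reply.
    return "You said: " + query

@app.route("/query", methods=["POST"])
def handle_query():
    text = request.get_json().get("text", "")
    return jsonify(reply=respond(text))

if __name__ == "__main__":
    # Any agent (listening device, display device) can now POST to /query.
    app.run()
```

With this in place, a listening device becomes just an HTTP client, which is what makes the "many agents" idea cheap to support.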
At some point, I’d also love to open-source this project (so that development can speed up and others can work on cool functionality like Facebook integration), but until I can work up a super easy plugin structure (and clean up my code), it will stay closed source.