Anyone else following Willow Text-to-Speech/Speech-to-Text development?

I'm guessing that once this has proven to be the real deal, it will get their attention. In the meantime, it looks like the developer is creating their own protocol for a flexible, independent workflow.

Our current plan for this is what we're calling the Willow Application Server (WAS) and the WAS protocol. One of the main project goals is to avoid tying any functionality to Home Assistant, so that WAS stays universally compatible with other supported endpoints (openHAB, Home Assistant, generic REST, etc.).
WAS will live standalone (like WIS) and take over handling of much of the Home Assistant interaction (the connection to HA will be configured there). We will have applications (like OpenAI) that can chain together steps such as this (or an LLM via WIS, API calls to arbitrary services, etc.) to enable flows such as:
Willow (wake) -> WIS ASR -> OpenAI/LLM/API/etc -> HA Action (optionally) -> TTS output of results
We would also love to see the community create a Willow component that integrates WAS protocol support in Home Assistant, so that Home Assistant can essentially "fake" a WAS instance and provide the same integration and chaining, to the extent that Home Assistant and the component are fundamentally able to.
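As a rough illustration of the chaining the developer describes, here is a minimal sketch in Python. All function names and return values are hypothetical stand-ins; a real WAS application would call WIS, an LLM endpoint, and Home Assistant over the network:

```python
# Hypothetical sketch of the WAS-style chain:
# Willow (wake) -> WIS ASR -> LLM -> optional HA action -> TTS

def wis_asr(audio: bytes) -> str:
    # Stand-in for a WIS speech-to-text request
    return "turn on the kitchen light"

def llm_interpret(text: str) -> dict:
    # Stand-in for an OpenAI/LLM call that maps the transcript to an intent
    return {"service": "light.turn_on", "target": "kitchen",
            "reply": "Kitchen light on"}

def ha_action(intent: dict) -> None:
    # Stand-in for a Home Assistant REST service call
    pass

def tts(reply: str) -> str:
    # Stand-in for WIS text-to-speech; here it just returns the reply text
    return reply

def was_chain(audio: bytes) -> str:
    intent = llm_interpret(wis_asr(audio))
    ha_action(intent)  # the HA step is optional in the real chain
    return tts(intent["reply"])
```

The point is only the shape of the pipeline: each stage consumes the previous stage's output, and the HA action can be skipped without breaking the chain.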

I wish Home Assistant (and other applications) would support MQTT natively for speech/voice recognition, so that none of these protocol concerns would matter. Personally, I run an older Windows-based SAPI5 TTS engine (NeoSpeech), all accessible via MQTT, so all of my systems can use this service (including Home Assistant, via service calls).
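For what it's worth, a TTS-over-MQTT setup like this can be as simple as publishing the text to speak on an agreed topic. A small sketch (the topic name and payload shape here are assumptions, not from any spec):

```python
import json

# Hypothetical topic that the MQTT-fronted TTS engine subscribes to
TTS_TOPIC = "tts/speak"

def build_tts_message(text: str, voice: str = "neospeech") -> str:
    """Build the JSON payload to publish on TTS_TOPIC.

    Any MQTT client can send this; from Home Assistant it maps to an
    mqtt.publish service call with this string as the payload.
    """
    return json.dumps({"text": text, "voice": voice})
```

From Home Assistant, the equivalent would be an `mqtt.publish` service call with `topic: tts/speak` and the JSON above as the payload, which is roughly how the service-call integration mentioned above works.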