It was in January 2018 that I launched TechCoffee, as an experiment both with Alexa Flash Briefings (daily news you can pull up on Alexa) and for myself, to practice some skills in Adobe Audition. TechCoffee, in short, is a daily tech news briefing of under 2 minutes, available at 5am on the Alexa platform.

Today marks the 600th episode of TechCoffee. More than two years have passed, with a lot of learning and changes along the way:

  • First I did TechCoffee with my own voice, every day. On a couple of episodes I had the help of friends, but I learnt that it takes the same voice, consistently, to build up an audience
  • That meant a lot of takes, working out the best way to position my voice and when to restart after a bad take
  • I had to buy proper gear and microphones, especially to reduce echo in bad locations
  • I also had to learn how to adapt, use and get the most out of voice effects for the recordings
  • Which meant learning multiband compression, single-band compression and voice volume leveling, but also noise reduction and parametric equalisation. Most of all, it meant learning to use Match Loudness so that everything would sound consistent (a rough ffmpeg approximation of a similar chain is sketched after this list)
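
My actual processing lives in Adobe Audition's effects rack, but a roughly comparable chain can be approximated with ffmpeg, which I already use later in the automation. The filter choices and values below are illustrative assumptions, not my real presets:

    # Rough approximation of the chain: noise reduction, single-band compression,
    # a touch of parametric EQ, then loudness normalisation (stand-in for Match Loudness).
    # All values are illustrative, not real presets.
    ffmpeg -i raw_take.wav \
      -af "afftdn=nf=-25,acompressor=threshold=0.1:ratio=3:attack=20:release=250:makeup=2,equalizer=f=3000:t=q:w=1:g=2,loudnorm=I=-16:TP=-1.5:LRA=11" \
      -ar 44100 -b:a 128k episode.mp3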

I even made a behind-the-scenes video:

But times change and things get refined and polished. One of the first issues was delivering the same voice consistency and end result, every day, from someone who isn't a native English speaker and whose voice, at the end of long days, is tired and sounds even worse.

So the 2020 version of TechCoffee came with Natural Text-to-Speech (NTTS), which brought the necessary consistency to the voice, a more English-sounding accent and, above all, less recording effort on my side (zero takes!). The platform I adopted for NTTS was IBM Watson and its wonderful voices. Yes, I tested all the others (Microsoft, Amazon, Google) and, yes, IBM delivers the best result.
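
At its core, the Watson Text to Speech call is a single HTTP request to the /v1/synthesize endpoint. The sketch below is illustrative only: the API key, service URL and voice name are placeholders, not my actual configuration.

    # Minimal Watson Text to Speech request (key, URL and voice are placeholders)
    curl -X POST -u "apikey:$WATSON_APIKEY" \
      -H "Content-Type: application/json" \
      -H "Accept: audio/mp3" \
      -d '{"text": "Good morning, this is TechCoffee for today."}' \
      --output episode.mp3 \
      "$WATSON_URL/v1/synthesize?voice=en-GB_JamesV3Voice"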

This also meant a new learning curve, especially Voice Transformation SSML, so that the daily news could be delivered properly: with a natural pace, a more reassuring voice and the right handling of technical words. As an example, it took some months for the speech synthesis to say “Covid19” or “CoronaVirus” correctly (a sketch of what that markup can look like follows below).
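
To give an idea, here is an illustrative sketch (not my actual episode script) of how the text can be prepared: awkward words rewritten before synthesis, and standard SSML elements such as prosody and break used to control pace and pauses. The resulting string then goes into the same "text" field of the synthesize request shown above.

    # Illustrative only: rewrite tricky words, then wrap the news text in SSML
    TEXT=$(sed -e 's/Covid19/covid nineteen/g' -e 's/CoronaVirus/corona virus/g' news.txt)
    SSML="<speak><prosody rate=\"-5%\">${TEXT}<break time=\"400ms\"/>That was TechCoffee for today.</prosody></speak>"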

Everything is then processed by scripts (bash and ffmpeg) and pushed to a WordPress post, which is then exposed as an RSS feed to the Alexa platform. Most of the time I rely on automation, but some episodes need tweaking, and Audition is there to help. A simplified sketch of the publishing step is below.
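
This sketch assumes the standard WordPress REST API with an application password; the site, user and file names are placeholders, not my real setup:

    # 1) Upload the mp3 to the WordPress media library (placeholder site and credentials)
    MEDIA=$(curl -s -X POST --user "techcoffee:$WP_APP_PASSWORD" \
      -H "Content-Disposition: attachment; filename=techcoffee-600.mp3" \
      -H "Content-Type: audio/mpeg" \
      --data-binary @episode.mp3 \
      https://example.com/wp-json/wp/v2/media)
    MP3_URL=$(echo "$MEDIA" | jq -r '.source_url')

    # 2) Publish the post; the RSS feed that Alexa reads is generated from these posts
    curl -s -X POST --user "techcoffee:$WP_APP_PASSWORD" \
      -H "Content-Type: application/json" \
      -d "{\"title\": \"TechCoffee #600\", \"content\": \"<audio src='$MP3_URL'></audio>\", \"status\": \"publish\"}" \
      https://example.com/wp-json/wp/v2/posts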

But today I decided to try something different and published episode 600 on LinkedIn. That also meant learning something new, especially Adobe After Effects, to create a waveform video for the mp3. Funnily enough, it also made me wonder why LinkedIn doesn’t allow mp3s (or podcasts) on the platform.

What I’m trying to say, I think, is that TechCoffee has been more fun for me than anything else. Despite the initial work, and the changes and ‘modernisation’ of the platform and content, I still find new things to learn. That’s what has kept TechCoffee going: 600 episodes over more than 2.5 years.
