So what I’m saying here isn’t that Auto-Tune is good if you crank the settings in the opposite direction and make someone sound as flat as a board. No, that would be a disaster for anyone within earshot.
Something I mulled over yesterday afternoon made me re-evaluate the whole context in which music listeners, artists, industry professionals and music purists view synthetically altered voices and other phenomena from companies like Antares Audio Technologies and the makers of Vocaloid. Yesterday afternoon I attended a panel at the combined New York Comic-Con/New York Anime Fest at NYC’s Javits Center. The panel was titled “Hatsune Miku: Beyond the Character,” and ASCII magazine writer Toshihiro Fukuoka and famed Hatsune Miku video producer Masataka P spoke about the overall development process that led to the creation of Hatsune Miku. They also showed and commented on past, present and yet-to-be-released song samples, video clips and Miku designs to demonstrate what the computerized “star” has grown from.
To give you an idea, even though I don’t have the panel footage: for the original voice, before Hatsune Miku the character was even a concept, the Vocaloid program took the little melodic ‘jingles’ played for departing and arriving Japanese trains and converted them into sung pitches with words for the synthesized voice to sing. It would take those jingles and sing things like “The train is coming! The train is coming!” or “Hurry, the train is leaving!” Basic purposes, but charming and functional ones.
Once the character image came into the picture (quite literally), things just took off and the sky became the limit for these guys and for Crypton Future Media, the company that developed the version of Vocaloid used when Miku’s character was being made. They played around with ideas, sounds and song styles, showing us clips of Hatsune Miku singing not just pop songs, but “Miku can sing opera! Miku can sing classical! Miku can even sing violin music!” I laughed when the rest of the panel attendees “awww’ed” at the lack of adequate time to play the clip of Miku singing Pachelbel’s Canon. Of course, with that reaction they just kept trying to show us everything, because the applause from everyone in the room was too hard to resist. For that clip in particular, they even said that all four of the characters (meaning both the visual designs and the varied “voices”) were all distinct but still all Miku, meaning all created from the MIKU software. Fukuoka-san called it “MIKU a cappella.” (I find something humorous in that, seeing as a cappella means all voice and no instruments, yet technological instruments were used to make these voices. Hah.)
So in essence, they can make many personas with technically unique voices, but they all come from the same program. Just for laughs, here’s a clip very similar to the one I saw and heard. And as a small piece of trivia: back with these earlier clips they just used drawings with minimal animation added, which was prior to the 3D image and video component.
(Warning, although this is entertaining, some may find it annoying after about 20 seconds.)
Hatsune Miku Performs Pachelbel’s Canon
Learning about the progression of Hatsune Miku’s visual development was interesting as well. Hatsune Miku, like any other Vocaloid-developed character, is programmed to perform dance moves and sequences through a related but separate program called MikuMikuDance. This program is freeware available to the public, which allows not only for “official” Hatsune Miku releases but also for fans of Miku to go to town and make all kinds of their own music videos, combined with covers of songs they create using the MIKU software.
I was lucky enough to see the very first playback of a ‘take zero’ of a new video being sold on DVD for the first time at the convention, as well as hear an officially created version of Hatsune Miku programmed to sing a cover of Lady Gaga’s “Poker Face,” both of which are below.
(Not the same exact video I heard yesterday, but pretty darn close. You get the idea.)
Hatsune Miku Performs Lady Gaga’s “Poker Face”
Masataka P’s New Video: Hatsune Miku feat. ATOLS – Eden
All this fuss and hype and praise over a completely fabricated form of entertainment. My original thought process and fascination with this panel continues from my last bout of intrigue, wherein I compared the dreams and realities of technology talked about by Disney, AKB48 and Hatsune Miku HERE. I think the reason I don’t find such avid appreciation annoying or disappointing, as I often do with praise of Auto-Tune use (besides the unique outfits, cool character design and odd sex appeal of a fake female pop star), is that unlike the idea of stripping away the authentic, natural voices of real people, the people behind Vocaloid have no qualms about marketing Hatsune Miku, her pop-star counterparts and their entire product line as artificial from the start. The voice that eventually gave rise to a digital star began as code for singing train jingles, and before that was simply software, which always boils down, at its core, to zeroes and ones. In this way, with an ever-evolving program and brand like Hatsune Miku, you see a type of growth and development. Sure, a cover of any pop or classical melody is nothing revolutionary, but it’s all relative.
When the first robotic arm was successfully installed and used, people marveled at the potential for support and efficiency it could bring to the field of product assembly. Think about it, though: I can use my arm to pick up an object and move it somewhere else. It’s not the action that is cause for celebration; it’s the end point the inventors reached after starting from nothing. With the MIKU Vocaloid software, MikuMikuDance and Hatsune Miku herself, the same idea applies. No one is claiming natural perfection or a “voice to rock generations to come,” so there’s no cause for alarm or aggravation about loss of authenticity or exaggeration of appeal. It was several ideas grown into a multi-tiered brand, marketing platform and product line from a core of zeroes and ones. That’s what makes it impressive and interesting and intriguing.
However, with Auto-Tune and the rise of:
1) Artists recording notes that are clearly out of their physical range,
2) Digitized melismas and
3) Straight-up robotic belting,
true human beings, who can be trained to use their inherent ability to articulate and communicate musically, are not displaying the addition and evolution of training but rather a subtraction of the very things a computer program and a bunch of numbers never had to begin with: vocal cords and the intelligent capacity to form coherent, complex thoughts and the speech to convey them.
This is why you can ultimately end up clapping and cheering for a fake singing girl with long teal hair and (at times) a singing range that makes her sound like a chipmunk, even in a world that also contains singing voices like Andrea Bocelli’s.