A team from Waseda University and KDDI labs is demoing a system that generates slideshows of Flickr photos from song lyrics. Images are chosen automatically for each line of the lyrics, relying in part on assigning each line a "general impression" drawn from a small dictionary of concepts such as season, weather and time of day. I'm not sure how generally applicable the system is, but the example slideshow I saw was surprisingly poetic, like the serendipitously beautiful sequences you sometimes see on the Last.fm visual radio player (which only uses artist images).
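A rough sketch of the idea (my own guess at how it might work, not the actual Waseda/KDDI implementation): score each lyric line against a small keyword dictionary of impression concepts, and use the winning concept as the Flickr search tag for that line's photo.

```python
# Hypothetical sketch: map each lyric line to an "impression" concept
# via keyword overlap, then use that concept as a Flickr search query.
# The dictionary and function names are all invented for illustration.

IMPRESSION_DICTIONARY = {
    "winter":  {"snow", "cold", "december", "frost"},
    "summer":  {"sun", "beach", "heat", "july"},
    "rain":    {"rain", "storm", "umbrella", "grey"},
    "night":   {"night", "moon", "stars", "midnight"},
    "morning": {"dawn", "sunrise", "morning", "waking"},
}

def impression_for_line(line: str) -> str:
    """Pick the concept whose keywords overlap most with the lyric line."""
    words = set(line.lower().split())
    scores = {concept: len(words & keywords)
              for concept, keywords in IMPRESSION_DICTIONARY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "abstract"  # fallback search tag

def slideshow_queries(lyrics: str) -> list[tuple[str, str]]:
    """Return (lyric line, Flickr search tag) pairs for a whole song."""
    return [(line, impression_for_line(line))
            for line in lyrics.splitlines() if line.strip()]

if __name__ == "__main__":
    demo = "The rain falls hard on a humdrum town\nUnder the midnight moon we danced"
    for line, tag in slideshow_queries(demo):
        print(f"{tag:>9}: {line}")
```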
Scott Miller presented an action-packed poster on GeoShuffle, an iPhone app which, amongst other things, recommends music based on geolocation information. In a nice experiment, GeoShuffle was given to a number of users for three weeks. During that time it created playlists for them, randomly switching between several different playlist generation methods every hour or so. Meanwhile their skip rate was recorded, to see which playlist generation algorithm they liked best. The method that chose songs similar to those previously listened to while travelling the same path won by a clear margin. Conclusion: humans are creatures of habit!
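A minimal sketch of that evaluation idea, with invented field names (the paper's actual logging and methods may differ): log which generation method produced each track and whether the user skipped it, then rank methods by skip rate.

```python
# Hypothetical sketch of comparing playlist generation methods by skip rate.
# Method names and the PlayEvent structure are assumptions, not GeoShuffle's API.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PlayEvent:
    method: str    # which playlist generator produced this track
    skipped: bool  # did the user skip it?

def skip_rates(events: list[PlayEvent]) -> dict[str, float]:
    """Fraction of plays skipped, per playlist generation method."""
    plays = defaultdict(int)
    skips = defaultdict(int)
    for e in events:
        plays[e.method] += 1
        skips[e.method] += e.skipped
    return {m: skips[m] / plays[m] for m in plays}

if __name__ == "__main__":
    log = [
        PlayEvent("same-path-similarity", False),
        PlayEvent("same-path-similarity", False),
        PlayEvent("random", True),
        PlayEvent("random", False),
        PlayEvent("genre-based", True),
    ]
    # Lowest skip rate first: the "winning" method under this metric.
    for method, rate in sorted(skip_rates(log).items(), key=lambda kv: kv[1]):
        print(f"{method}: {rate:.0%} skipped")
```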