Archive for the ‘images’ Category
Modern tech life teems with longstanding quandaries, questions that never seem to go away. Mac or Windows? Turn off the computer every night or let it sleep? Plasma or L.C.D.?
Fortunately, that last question will soon have an answer. There’s a new TV on the block, and its picture is so amazing, it makes plasma and L.C.D. look like cave drawings.
It’s called organic light-emitting diode, or O.L.E.D. This technology has been happily lighting up the screens of certain cellphone and music-player models for a couple of years now, but Sony is the first company to offer it in a TV screen. Its set, the XEL-1, is available only from SonyStyle stores, and its picture is so incredible, Sony should include a jaw cushion.
At a cooperative Best Buy store, I did a little test. I set the XEL-1 up next to state-of-the-art plasmas and L.C.D. sets — all hooked up to the same video signal for easy comparison — and recorded the reactions of shoppers and employees. Their adjectives for this picture included “astonishing,” “astounding,” “incredible” (twice) and “amazing” (five times).
They were right. The XEL-1’s picture is so colorful, vibrant, rich, lifelike and high in contrast, you catch your breath. It’s like looking out a window. With the glass missing.
Name a drawback of plasma or L.C.D. — motion blur, uneven lighting across the panel, blacks that aren’t quite black, whites that aren’t quite white, limited viewing angle, color that isn’t quite true, brightness that washes out in bright rooms, screen-door effect up close — and this TV overcomes it.
Plasma is supposed to offer darker blacks than L.C.D., but O.L.E.D. trumps both of them. Next to this TV, even the blacks on the critically adored Pioneer Kuro plasma screen look very dark gray. Blacks on Sony’s O.L.E.D. TV are jet black. Absolute black. Black-hole black — and kuro even means black in Japanese.
(If you’re a TV-technology geek and you’re getting a distinct feeling of déjà vu, congratulations. All of this does sound exactly like the descriptions of S.E.D. television prototypes demonstrated years ago by Toshiba and Canon. Unfortunately, that equally impressive picture technology never made it out of the lab.)
To make this thing even more drool-worthy, the XEL-1’s screen is only three millimeters thick — shirt-cardboard thick. If they could build a laptop with a screen this thin, it would make the MacBook Air look like a suitcase.
Google creates PageRank for images
When Google introduced its PageRank algorithm years ago, it gave web searchers a metric for gauging the authority of a web page. Google researchers now say they have developed technology that does for images what PageRank did for web pages.
The New York Times reports that a pair of Google scientists presented a paper called “PageRank for Product Image Search” at the International World Wide Web Conference in Beijing. The technology, called VisualRank, is at its core an algorithm that blends image-recognition techniques with a method for weighting and ranking images according to how similar they look.
Google already has an image search engine that is widely held to be one of the largest image databases online. The current system retrieves images based on clues in the text associated with each image. This, for instance, is why you might get a photo of President George W. Bush if you did an image search for “Republican.”
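The text-based approach described above can be sketched as a toy inverted index. This is my own illustration, not Google’s code; the file names and captions are invented.

```python
# Each image is indexed under the words of its surrounding text (file name,
# alt text, nearby captions), so a query matches text, not pixels.
index = {}

def add_image(url, associated_text):
    """Index an image URL under every word of its associated text."""
    for word in associated_text.lower().split():
        index.setdefault(word, set()).add(url)

def search(word):
    """Return all image URLs whose associated text mentions the word."""
    return index.get(word.lower(), set())

add_image("bush_speech.jpg", "President George W Bush addresses the Republican convention")
add_image("elephant.jpg", "the Republican party mascot is an elephant")

results = search("Republican")  # matches both images, relevant or not
```

Note how the query never looks at the pixels: any image whose caption happens to mention the word comes back, which is exactly the weakness the researchers’ paper tries to address.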
The paper the Google researchers presented proposes a method of ranking images based on what is actually in them. Technology for recognizing faces in images has been in place for a while, but computers still lag behind humans at identifying other things people recognize at a glance, such as a car or a mountain.
Google researchers Shumeet Baluja and Yushi Jing told the New York Times, “We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework.”
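The core idea in the paper, a PageRank-style computation over a graph whose edges are visual-similarity scores, can be sketched roughly as follows. This is a toy illustration under my own assumptions, not Google’s implementation: the similarity values are invented, and computing real similarity scores from image features is the hard computer-vision part the paper builds on.

```python
def visualrank(similarity, damping=0.85, iterations=50):
    """PageRank-style power iteration over an image-similarity graph.

    similarity[i][j] is a visual-similarity score between images i and j;
    images that many other images resemble accumulate high rank.
    """
    n = len(similarity)
    # Column-normalize so each image distributes its rank among its look-alikes.
    col_sums = [sum(similarity[i][j] for i in range(n)) or 1.0 for j in range(n)]
    rank = [1.0 / n] * n
    for _ in range(iterations):
        rank = [
            (1 - damping) / n
            + damping * sum(similarity[i][j] * rank[j] / col_sums[j] for j in range(n))
            for i in range(n)
        ]
    return rank

# Toy graph: images 0 and 1 strongly resemble each other; image 2 is an outlier.
sim = [[0.0, 0.9, 0.1],
       [0.9, 0.0, 0.1],
       [0.1, 0.1, 0.0]]
ranks = visualrank(sim)  # images 0 and 1 outrank the outlier
```

The intuition is the same as for web pages: a page linked to by many pages is probably important, and an image that many other images resemble is probably a canonical picture of its subject.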
New service TagCow caused a bit of a stir over the weekend. The product seemingly solves the problem of auto-categorization and tagging of photos, something that seems to still be beyond the processing power and software skills of most startups.
Users upload photos – thousands of them if they like – and within a few minutes the photos are returned with stunningly accurate descriptive keywords that facilitate searching and browsing later on. The product worked so well, and the site had so little description of the technology behind it, that I speculated that humans were doing the work in the background.
And… I was right. A reader sent in a tip that they had seen the service on Amazon’s Mechanical Turk, a marketplace that pays people to do tasks that are still fairly hard for computers. TagCow is actually a perfect fit for Mechanical Turk.
Workers are paid 4 cents to properly tag a group of five photos. I tagged a few photos with “TechCrunch” twenty times each, collected my 4 cents, and moved on. My guess is it would take about two minutes to properly tag the five photos. That means if you work steadily and without breaks, you can make $1.20 an hour. More if you are speedier.
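The back-of-the-envelope wage math (4 cents per five-photo task, roughly two minutes per task) works out like this; a quick sketch, with the function name my own:

```python
def effective_hourly_rate(cents_per_task, minutes_per_task):
    """Dollars per hour at a steady, break-free pace."""
    tasks_per_hour = 60 / minutes_per_task
    return tasks_per_hour * cents_per_task / 100

rate = effective_hourly_rate(cents_per_task=4, minutes_per_task=2)
print(f"${rate:.2f}/hour")  # $1.20/hour, matching the estimate above
```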
Website: www.tagcow.com
Founded: March 2008
TagCow, launched in March 2008, is a service that tags your photos with descriptive keywords. If there’s a mountain in the photo, it’s tagged. A dog? Yep. A yellow cup? Absolutely. It does people, too. Upload an image of a person and say who it…