As artists and communicators, we produce our creations using the tools at hand. What we can create depends largely on those tools, but those tools also shape the process of creating the work, the "how it's done" aspect. Here's an example of how a simple magazine cover illustration was produced in the "old school" days of analog photography.
For years I produced the photo-illustrations for the cover of a trade magazine called “Word of DQ.” This was distributed to the thousands of Dairy Queen franchise owners throughout the country and was produced by the DQ home office in Minneapolis. Each month’s issue had a lead story that had to be highlighted on the cover with an appropriate and eye-catching photo illustration.
These magazine covers were good examples of effective low-budget photo production. Each month, DQ’s art director and I had to come up with a visual idea that would represent that month’s lead story. The budget was tight—I would have perhaps a half day of billable time to bring the idea to life. It proved to be superb training in the art of visualizing ideas with economy and efficiency.
One month's theme was "Flipping the Switch," a metaphor for the kickoff of a big new marketing program. The art director wanted to illustrate the concept by picturing some sort of switch being engaged, with special effects to depict sparks, lightning, or something of the sort. Of course, this being almost 20 years ago, none of this could be accomplished using digital means. Special effects back then had to be created in the studio.
Today, photo-illustrations often begin with someone scouring the internet for images. Sometimes, a stock photo image is found that conveys the idea well enough, and that’s the end of the process. Back then, image ideas came out of discussion, sketches, and maybe a trip to the local antique store.
At this stage of the process, lots of ideas got kicked back and forth. The art director wanted to have a visually-interesting switch for the illustration, and one of us (I forget who) came up with the idea of finding one of those old “steampunk-style” blade switches. You know, the type you’d find in Dr. Frankenstein’s laboratory. I had actually managed to round up one of these from a collector.
Further discussion nixed that idea, with the visual emphasis of the concept moving from the switch itself to the idea of the energy produced (fire and sparks and all that). Thus it was decided that a regular light switch would be used (painted red for added visual impact) and that the sparks and background should have a contrasting blue color.
This was hastily sketched out and faxed to me. Here it is:
I'm sure this drawing didn't take my client more than a minute to make. All art directors of that era were expected to have rapid drawing skills. Ideas had to be presented and "sold" to superiors and clients, and making a drawing of the proposed layout was the only way to do this before the PowerPoint age. Such comprehensive layouts could be finely detailed (and almost indistinguishable from the final product) or very fast and rough, like this one.
This came to my studio via fax machine, another technological relic. It was sized so that I could retrace the image onto clear plastic—this template could then be fastened to the ground glass viewing screen of my 4×5 view camera to better help me in composing the shot to precisely fit the layout of the publication.
Before the day of the shoot, I also did some prep work by spray-painting the household switch and visiting the local magic shop, where I purchased a few packs of “4th of July” sparklers.
I had created spark effects with these incendiary devices before. You can get some really interesting images by opening the camera's shutter in a totally dark studio and "drawing" shapes in the air with the lit fireworks. I also knew that they produce lots of acrid smoke and can cause nasty burns if mishandled. Getting the right photographic exposure of the sparks and determining the right shapes to draw in the darkness would require some experimentation and testing with Polaroid instant film.
And so we began shooting Polaroids. Setting up the basic lighting for the light switch and hand was easy—this was lit with electronic studio flash lighting. Once we had this perfected, we wrote down the pertinent power settings and F-stop information.
The next tests took place in total darkness. I put a blue filter over the camera’s lens to get the color we wanted. Then, the camera shutter was opened and the art director lit the sparkler and “drew” shapes in the darkness radiating outward from behind the light switch. It took several tests using different F-stops to get this sorted out.
Finally, a composite test was done on Polaroid. The camera shutter was closed and the electronic flash lighting readied for the shot of the switch and hand. After this first shot was made, the film was left in place in the camera and the lighting turned off. The blue filter was put in place, and the shutter carefully re-cocked and set to "T" (time exposure). This had to be done without moving the camera, as any shift would ruin the registration of the two images. Then the sparkler motion work was done in the darkness, the shutter was closed, and the lights came back on. At last, the Polaroid film was run through its 60-second process. Here's the result, taped to a piece of scrap paper with my original tech notes:
Satisfied with the tests, we could then go on to expose some real 4×5 color transparency film. This two-step exposure process had to be done for every precious sheet, and although we had a good Polaroid test, we wouldn’t know if we actually had success until the film was processed the next day. Fortunately, it worked out rather well.
Today, such an image could be easily composited in Photoshop from a couple of inexpensive stock images. Heck, something quite similar to this is probably available as a ready-to-go composite that can be had for a nominal fee. There's so much of this sort of imagery for sale on the internet these days that an art director hardly has to leave his or her desk to get the images needed.
This is convenient, but it’s a bit of a shame. Things are easy and fast today, but I’m not sure they’re more creative. “Old School” meant experimentation, struggle, mishaps, improvisation, and sometimes failure. You learned valuable things in the process and ended up with a solution that was, if not quite as slick as today’s digital wonders, a hard-won victory. What you got for your smoke-filled studio and first-degree burns was an image that nobody else had, a totally custom solution to a visual challenge. For me, that was the reward.
Tags: analog photography, commercial photography, old-school photography, photo-illustration, Photoshop, Polaroid testing, special effects
For people interested in technology (and tech bloggers like myself), the technology sector of the business world provides an unending drama to observe and comment on. It’s the ultimate high-stakes soap opera: once-mighty companies fall to obscurity, revolutionary start-ups create instant billionaires, and companies continually jockey to maintain and strengthen their positions to stay with the thundering herd.
The people who run these companies are also endlessly fascinating. Think Larry Ellison (Oracle) and his America’s Cup sailing team. Or Marissa Mayer (Yahoo) and her photo shoot for Vogue magazine. Or (of course) the late Steve Jobs, a man so eccentric and egotistic that he was said to generate his own “reality distortion field.”
These factors converged recently when Microsoft purchased Nokia's mobile phone business for 7.2 billion dollars. All the ingredients for a great story were there: two mighty firms, both playing defense in the wake of recent technology trends, combining forces to compete in a new tech landscape. Rumors of conspiracy and intrigue. And of course, the reality distortion field that is the current state of executive compensation.
For perspective, it helps to remember the way things were 10 years ago. Microsoft’s dominant position in personal computing was overwhelming and the concept of “mobile computing” was nothing more than lugging a laptop around. Nokia was in a similar position with mobile phones, selling handsets by the millions.
These two worlds were brought together with the introduction of Apple’s iPhone in 2007. Two pivotal things happened—the phone became a full-fledged computer (not just an enhanced email, texting and paging device), and smartphones became mass-market devices rather than specialty items for business executives.
Apple’s competitors worked hard to meet the iPhone’s challenge, with Google producing the most successful rival. Its Android mobile operating system (OS) is number one worldwide in smartphones today, largely because it’s free software and multiple handset manufacturers have adopted it.
Nokia was making smartphones running its Symbian operating system at this time. This mobile OS, while never as popular in the U.S. as it was in Europe, was a credible contender and even outdid Apple on several fronts (video phone calling was happening on Nokia handsets before Apple's FaceTime came along).
Things changed in 2010, when Stephen Elop, then head of Microsoft's Business Division, became Nokia's president. One of the first things Elop did was pen the now-famous "burning platform" memo, outlining his opinion that Nokia's dependence on Symbian was like being at sea on a burning oil platform and having to decide whether to stay and die or jump off into the unknown. It should be pointed out that at this time, Symbian was the world's leading smartphone operating system.
Under Elop's leadership, Nokia changed course, adopting the then-new Windows Phone 7 mobile OS from Microsoft. Conspiracy theorists blogged that Elop was Microsoft's secret agent, a Trojan horse planted by Redmond to ensure that its new and relatively untested mobile OS would have at least one major handset maker dedicated to it.
Results were mixed. Nokia's new smartphones were generally well-regarded, but Windows Phone was mostly competing with the fast-fading BlackBerry for third place in a two-horse race. In the meantime, Nokia laid off thousands of employees and watched its stock value plummet.
And now the Microsoft buyout, and with it, Elop's return to Microsoft with bonuses and compensation amounting to 25 million dollars. The conspiracy theorists are now howling that they were right all along, that the whole thing was a clever plot by Microsoft to get their man in, drive the stock price down, and scoop up the company on the cheap. Whatever the truth, the deal puts Microsoft in the same league as Apple and Google (with its purchase of Motorola Mobility). All have a mobile OS, all can manufacture their own devices, and all have a portfolio of hardware patents to threaten each other with. BlackBerry has all this too, but as the number-four smartphone platform, its future looks pretty grim: it is about to be purchased by a private equity firm for even less money than Nokia's phone business fetched.
Elop’s massive bonus (and the furor that followed) brings us back to the idea of a reality distortion field, along with some basic questions about what a (highly paid) chief executive officer of a company is actually supposed to be doing to earn that fat paycheck.
If you ask Nokia’s shareholders, they won’t call this a success. Their stock lost over 80% of its value during Elop’s tenure. If the primary duty of a CEO is to increase shareholder value, this has been a failure.
If you ask any of the thousands of Nokia employees who lost their jobs during this fiasco, they'd probably give Elop a failing grade too, adding that Nokia couldn't have possibly done worse had it adopted Android for its handsets.
So, in the reality distortion field that is the realm of the world's top executives, one should ask, "Who was Stephen Elop working for?" To me, it doesn't seem like he was working for Nokia's shareholders. Out-of-work Nokia employees certainly don't think he was working for them. The conspiracy theorists think he was working for Microsoft all along.
I think he was working for himself.
And he seems to have done pretty well from that standpoint.
Now, there are smart people out there who have written that Nokia's fate would have been the same even if Elop had never become its president, or even if the company had switched to Android or some other successor to Symbian. True enough, but the situation certainly highlights the absurdity of executive compensation in the tech sector, where we've seen a string of failed CEOs leave their companies with sacks of cash. I guess it's just a world where things stop making sense.
Now, much wealthier and back in Redmond, Elop is being considered as one of the front-runners to succeed Steve Ballmer as Microsoft's next CEO. I think Microsoft's board of directors and shareholders might want to think carefully about this before making any decisions. Pundits have already declared that Nokia has jumped from one burning platform to another, larger burning platform. And if Elop gets the top job and does to Microsoft's stock price what he did to Nokia's, you might want to check your IRA or 401(k), for you could be the one getting burned.
Tags: bonus, executive compensation, executive pay, Microsoft, mobile phones, Nokia, reality distortion field, Stephen Elop
For over a decade, photographers have benefited from a “megapixel war” among the various manufacturers of digital cameras. As technology progressed, companies such as Canon and Nikon crammed ever-more sensor elements into their imaging chips.
Generally, more megapixels are better than fewer. With more pixels, an image can be printed at a larger size or the user can crop it and still have enough pixels to produce a quality picture. However, there is a point where this "arms race" ventures into the realm of absurdity.
Here’s one example. At HMML, we have a small Canon point-and-shoot camera that has a 14-megapixel sensor. The actual size of the images produced is 4320 by 3240 pixels, impressive by any account for such a small camera.
Doing the math, however, illuminates some of the absurdities inherent in this. The imaging chip in this small camera measures a mere 6.17 by 4.55 millimeters. Getting 14 million sensor elements into that area means there are over 700 individual sensor elements per millimeter! Does anybody think the lens on this point-and-shoot can actually resolve detail at that level? Not a chance. This is an example of what I call "more pixels, but not much more detail." At some point there are diminishing returns, with the only concrete result being the extra data that the user needs to archive.
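For anyone who wants to check the arithmetic, here's a rough sketch in Python. The sensor and image dimensions are the figures quoted above; the 300-dpi print standard is just a common rule of thumb for high-quality prints, not anything specific to this camera.

```python
# Rough numbers for the 14-megapixel point-and-shoot discussed above.
# Sensor dimensions and image size are the figures quoted in the text;
# 300 dpi is simply a common rule of thumb for high-quality prints.

sensor_w_mm, sensor_h_mm = 6.17, 4.55   # tiny point-and-shoot sensor
image_w_px, image_h_px = 4320, 3240     # pixel dimensions of the files

pixels_per_mm = image_w_px / sensor_w_mm          # ~700 photosites per millimeter
pixel_pitch_um = sensor_w_mm / image_w_px * 1000  # ~1.4 micron pitch

print_w_in = image_w_px / 300   # ~14 inches wide at 300 dpi
print_h_in = image_h_px / 300   # ~11 inches tall at 300 dpi

print(f"{pixels_per_mm:.0f} pixels/mm, {pixel_pitch_um:.2f} micron pitch")
print(f"{print_w_in:.1f} x {print_h_in:.1f} inch print at 300 dpi")
```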
But it makes for good marketing hype.
Now this marketing effort is coming to the realm of television in the form of “4K video,” also known as Ultra High Definition (UHD).
To understand UHD, it’s good to review the various television resolutions available today. For most of the history of TV, we’ve had “Standard Definition” (SD) video. The actual pixel dimensions of SD video vary a bit depending on the delivery system used, but it’s often referred to as 480p—that is, 480 pixels in the vertical screen dimension with progressive scanning (each video frame scanned at once and in sequence). This is the sort of video you get on a regular DVD, and it’s important to note that most of the legacy video content out there is encoded at this resolution.
High Definition (HD) video has been around for quite some time now, but there are actually two resolution specifications that qualify for this label. 720p television sets display 1280 by 720 pixels; they are typically the smaller and less expensive TV sets available to buyers. A "full HD" TV set displays 1920 by 1080 pixels (1080p).
Many feature films and television programs are “filmed” with digital video cameras having what is called “2K” resolution; this produces an image measuring 2048 by 1152 pixels. These images can be projected in theaters to enormous size and look gorgeous; of course they also can be slightly downsampled for HD television broadcast with no loss of quality.
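To put these formats side by side, here's a quick comparison sketch in Python. The 4096 by 2160 figure for UHD is the one I use later in this post, and 640 by 480 stands in for Standard Definition, whose exact dimensions vary by delivery system.

```python
# Pixel dimensions discussed in this post; 640x480 stands in for
# Standard Definition, whose exact dimensions vary by delivery system.
formats = {
    "SD (480p)":  (640, 480),
    "HD (720p)":  (1280, 720),
    "HD (1080p)": (1920, 1080),
    "2K":         (2048, 1152),
    "UHD (4K)":   (4096, 2160),
}

base = formats["HD (1080p)"][0] * formats["HD (1080p)"][1]
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name:11s} {w}x{h} = {pixels/1e6:5.2f} Mpixels "
          f"({pixels/base:.2f}x the pixel count of 1080p)")
```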
And, like the tiny point-and-shoot camera mentioned earlier, I think 4K is mostly hype, a technology pushed by the television set manufacturers after their failed efforts to convince consumers that 3-D television sets were the wave of the future.
Paying for Resolution that You Can’t See
The first point I can make about 4K is that for most home television viewers, the enormous added resolution of UHD will be imperceptible. The typical viewing distance for television in most homes is around 9 feet (referred to as the Lechner Distance after the television engineer who researched this matter). At this distance, typical human vision cannot distinguish individual pixels in a 46-inch (diagonal screen dimension) 720p TV set. For 1080p, it takes a set larger than 69 inches for a person to distinguish them.
For UHD, with its picture size of 4096 by 2160 pixels, you'd need a 138-inch TV set at nine feet to reach the point where individual pixels can be perceived. That's a screen eleven and a half feet across, measured diagonally! A viewer watching a smaller TV set (pretty much everyone) simply won't be able to see the extra resolution. Since these proposed UHD TV sets will certainly cost more than the HD sets available today, what exactly is the consumer getting for the extra money?
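Those screen-size figures follow from a simple rule of thumb: normal 20/20 vision resolves detail down to about one arcminute. Here's a small Python sketch of that calculation; the one-arcminute threshold and the 16:9 screen geometry are the only assumptions, and it reproduces the 46-, 69-, and 138-inch numbers above.

```python
import math

# Assumption: 20/20 vision resolves about 1 arcminute of detail;
# screens are assumed to have a 16:9 aspect ratio.
ARCMINUTE_RAD = math.radians(1 / 60)
VIEWING_DISTANCE_IN = 9 * 12   # the ~9-foot "Lechner distance"

def diagonal_where_pixels_show(vertical_pixels, distance_in=VIEWING_DISTANCE_IN):
    """Smallest 16:9 diagonal (inches) at which individual pixels
    become distinguishable at the given viewing distance."""
    pixel_height_in = distance_in * math.tan(ARCMINUTE_RAD)
    screen_height_in = vertical_pixels * pixel_height_in
    return screen_height_in * math.hypot(16, 9) / 9   # height -> 16:9 diagonal

for label, rows in [("720p", 720), ("1080p", 1080), ("UHD", 2160)]:
    print(f"{label}: ~{diagonal_where_pixels_show(rows):.0f}-inch screen at 9 feet")
```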
Legacy Video Content Could End Up Looking Worse
At this time, there’s still far more Standard Definition video programming out there than HD. This can be a problem when playing such content on a large 1080p set, as the 480p image has to be upsampled to over twice its original size to fit the higher-definition display. In many cases that I’ve seen, the result can be downright ugly. SD video upsampled to 720p generally looks much better to me—the system doesn’t have to create as many “bogus pixels” to enlarge the image.
What’s going to happen when 480p video is upsampled to 2160 pixels (a factor of four and a half)? I haven’t seen any tests, but this has the potential to be completely unwatchable unless the engineers working on UHD are figuring out some sort of technical workaround. Even full 1080p content will end up being enlarged by a factor of two to run on these new television sets, with the inevitable fuzziness, ragged edges, and other enlargement artifacts.
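As a back-of-the-envelope illustration, here's a small Python sketch. The "interpolated share" is simply the fraction of output pixels that have no corresponding source pixel, which ignores how clever real scaling engines are, and 640 by 480 again stands in for an SD frame.

```python
# Rough measure of how much work an upscale asks of the interpolator.
# "Interpolated share" is the fraction of output pixels with no
# corresponding source pixel; real scalers are smarter than this,
# but the ratio is the point. 640x480 stands in for SD.
def upscale_stats(src, dst):
    (sw, sh), (dw, dh) = src, dst
    linear_factor = dh / sh                 # vertical enlargement factor
    interpolated = 1 - (sw * sh) / (dw * dh)
    return linear_factor, interpolated

cases = [
    ("480p  -> 1080p", (640, 480),   (1920, 1080)),
    ("480p  -> UHD",   (640, 480),   (4096, 2160)),
    ("1080p -> UHD",   (1920, 1080), (4096, 2160)),
]
for label, src, dst in cases:
    factor, interpolated = upscale_stats(src, dst)
    print(f"{label}: {factor:.2f}x linear enlargement, "
          f"{interpolated:.0%} of displayed pixels are interpolated")
```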
So, for much of the existing video material available to viewers, buying a 4K TV set could actually end up making things look worse rather than better.
Bandwidth and Storage Nightmares
If you look at the graphic above, you can see that UHD video provides over four times the pixel count of 1080p HD. With standard digital video compression, it appears to require about 3 times the network bandwidth to deliver that video to viewers. For companies such as Netflix, Amazon, Hulu, and others, this means having to store much more digital data for on-demand delivery. For cable TV companies and other Internet Service Providers (ISPs), it means having to build more digital bandwidth to accommodate that data as the market moves inexorably towards more streaming video consumption.
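To get a feel for the scale involved, here's a rough Python sketch of the storage arithmetic. The bitrates are illustrative guesses on my part (a few Mbps for 1080p streaming and roughly three times that for UHD, in line with the estimate above), not published numbers from Netflix or anyone else.

```python
# Illustrative streaming bitrates in Mbps -- rough assumptions, not
# published figures from any particular service. The UHD number follows
# the "about 3 times the bandwidth of 1080p" estimate in the text.
BITRATES_MBPS = {"1080p HD": 5.0, "UHD (4K)": 15.0}

def gigabytes_per_hour(mbps):
    """Convert a constant bitrate in megabits/second to gigabytes/hour."""
    return mbps * 3600 / 8 / 1000   # Mb/s -> MB/hour -> GB/hour

for label, mbps in BITRATES_MBPS.items():
    print(f"{label}: {mbps} Mbps ~ {gigabytes_per_hour(mbps):.1f} GB per hour of video")
```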
Who's going to pay for these increased data storage costs and cable infrastructure upgrades? Here's a hint: it won't be Netflix, Hulu, Amazon, Comcast, or any of these companies. It will be their customers, folks like you and me. All to deliver pixels that can't realistically be seen, on expensive new TV sets that might make your existing DVDs look lousy. Who's the winner here?
Not you and me. However, companies like Sony and Panasonic, and vendors such as Best Buy are going to do their best to cash in on the hype.
Tags: 2K, 4K, Amazon, cable television, Comcast, HDTV, High Definition, Hulu, hype, megapixels, Netflix, Standard Definition, UHD, Ultra High Definition, viewing distance