
The idea behind patents makes intuitive sense. If a company spends significant resources on researching a new product, it should be granted temporary exclusive rights to its findings. This prevents rivals from stealing those results and profiting from them without paying any costs. Otherwise, the innovative firm would end up losing money for developing new technology.

Keeping this in mind, the Toyota Motor Corporation caused quite a stir when it announced this past January that it is inviting its competitors to use Toyota’s 5,680 patents on hydrogen fuel cell vehicles for free until 2020. After all, a patent is by definition protective. So what’s going on? Have the execs of the car company lost their minds? No, it turns out that Toyota’s move to share its patents is a gamble, but it’s not irrational. The challenges of hydrogen production and distribution have pushed the company to open up its patents so that hydrogen vehicles can gain the “critical mass” they need to overcome these problems and enter the mainstream market.

Understanding hydrogen vehicles 

To understand hydrogen vehicles, one must first understand the process of electrolysis. Electrolysis is the application of a direct electric current (DC) to induce an otherwise non-spontaneous chemical reaction. For example, the electrolysis of water separates water into its component elements: hydrogen and oxygen.
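
For reference, the chemistry is simple, and the two processes discussed in this article are mirror images of each other:

```latex
% Overall reactions (standard textbook chemistry):
% electrolysis (energy in, driven by a DC current):
\[ 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2} \]
% fuel cell (energy out, the reverse reaction):
\[ 2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} \]
```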

Hydrogen vehicles rely on fuel cells to perform reverse electrolysis, taking in hydrogen as fuel and oxygen from the surrounding air to produce electricity. Their only exhaust is water vapor (pure enough to drink), meaning hydrogen vehicles emit no greenhouse gases whatsoever. Compare this to the roughly 1.8 billion tons of carbon dioxide that conventional vehicles release, which the Environmental Protection Agency reports accounts for 28 percent of all greenhouse gas emissions in the United States (second only to power plants). And while a kilogram of hydrogen has about the same chemical energy as a gallon of gas, fuel cells are so much more efficient than combustion engines that, functionally, a kilogram of hydrogen is equivalent to more than six gallons of gas.

Hydrogen vehicles even have advantages relative to hybrids and electrics. Because fuel cells are small and thin, they can be stacked for vehicles of greater size – unlike hybrids, which are restricted by their heavy and bulky batteries. Hydrogen vehicles can also go farther between refuels than most electrics can between recharges: Toyota’s Mirai has a range of 300 miles, compared with the 265 miles of Tesla’s Model S (by far the top range for electrics). In addition, hydrogen vehicles can be refueled in three to five minutes, whereas even Tesla’s Superchargers require at least 20 minutes for a full charge. Given all this, it isn’t surprising that many experts and industry leaders see hydrogen vehicles as the future of transportation.

Promising but problematic

The process of electrolysis has been well understood since the 18th century, so why have hydrogen vehicles entered the market only now, two decades behind hybrid vehicles? For much of that time, hydrogen vehicles have been too expensive to manufacture to be of consumer interest. Hydrogen fuel cells need expensive platinum as a catalyst in order to perform reverse electrolysis fast enough for the vehicle’s operation. Furthermore, hydrogen is highly flammable and, like all gases, expands with rising temperatures (such as those found under the hood of a moving car). Luckily, technological advances have lessened the amount of platinum required, and safe ways of containing hydrogen have been developed. According to Toyota, the cost of making key components of the vehicles has fallen 95 percent in the past seven years, allowing it to sell the Mirai at $57,000 (less than a Model S) instead of the $100,000 it projected in 2008.

But no matter the cost, a hydrogen vehicle will need hydrogen. And although hydrogen is the most abundant element in the universe, almost none of it exists on Earth in free form; nearly all of it is locked up in compounds such as water and hydrocarbons. Therefore, hydrogen must be derived from other substances.

The two main ways of producing hydrogen are electrolysis of water and a process called steam reformation, in which natural gas is reacted with high-temperature steam to separate the hydrogen from the hydrocarbons in the gas. Conventional electrolysis is too energy-intensive and expensive to be practical at scale, so it can be ruled out for now, leaving steam reformation. But since natural gas is a fossil fuel, and the point of hydrogen vehicles is to reduce dependence on fossil fuels and their consequences, steam reformation isn’t preferable either.

And even if this challenge were overcome, there exists little infrastructure for delivering that hydrogen. Gas stations are of course everywhere, and the number of charging stations for plug-in hybrids and pure electrics continues to increase, but there are virtually no hydrogen fueling stations. As of this writing, fewer than 70 such stations exist in the entire United States – most of them in California, where the Mirai will begin to sell later this year. Clearly, having a hydrogen car is one thing, but being able to drive it is another.

“Critical mass” solutions

Fortunately, there is work being done on both of these problems. On the hydrogen production front, new technologies such as fermentation, photobiological water splitting, and renewable liquid reforming are being developed. A hydrogen fueling station in Fountain Valley, a suburb of Los Angeles, has already employed one of the newest techniques. The station uses human waste from a nearby wastewater plant as its hydrogen source by adding bacteria to turn waste into carbon dioxide and methane, which is then converted to electricity, heat, and hydrogen.

More recently, scientists at the University of Glasgow published a paper in Science this past September describing a new water-electrolysis method that produces hydrogen 30 times faster than the current best processes. The method needs much lower currents than traditional electrolysis, making it possible to power it with renewable energy and thus making the use of hydrogen vehicles completely emission-free. But while these results are promising, they will require a significant amount of capital for further research and implementation testing before they can be commercialized.

As for hydrogen delivery infrastructure, California has invested $200 million to build 100 hydrogen fueling stations and is willing to invest more if the program succeeds. Toyota has also loaned $7.3 million to FirstElement Fuels, Inc., to build 19 stations in California. The company is also working with Air Liquide S.A. to build 12 more stations in New York, New Jersey, Massachusetts, Connecticut, and Rhode Island, where it will begin selling the Mirai next year. Though these sums sound high, the stations are actually cost-competitive with gas stations on a cost-per-mile basis, since fewer are needed thanks to the higher efficiency and thus greater range of hydrogen vehicles. As long as states or private companies are willing to invest in building these stations, the country could conceivably go hydrogen.

But the key words here are “as long as.” If hydrogen vehicles remain a fringe technology, they risk being pushed out of the alternative niche by hybrids and pure electrics, which already have established infrastructures. Furthermore, hybrids and pure electrics lack the problem of energy source production, so they are already favored and more likely to be further developed. Simply put, if hydrogen vehicles don’t gain significant awareness soon, they will be outcompeted by their alternative-fuel rivals.

This problem explains Toyota’s actions. Although a major player in the auto industry, Toyota understands that its bet on hydrogen vehicles alone is not large enough to draw the critical degree of attention needed. By giving free access to its patents, Toyota effectively lowers other companies’ entry costs, since it has already paid for research they would otherwise have to do, in hopes of enticing more automakers, fuel cell suppliers, and energy companies to enter the market. This would in turn increase the volume of hydrogen vehicles and related support, which could push the technology into the spotlight and attract investors and capital to solve the production and infrastructure challenges discussed above. At or near that point, which Toyota judges to be about five years away given the duration of its offer, Toyota will close off access to its patents and refocus on its own research and business. Essentially, Toyota is betting that what it gives up by sharing its patents is less than what it stands to lose if hydrogen vehicles never become mainstream.

This isn’t the first time Toyota has employed such a strategy. In 1997, Toyota licensed its patents for hybrid technology to Ford, Nissan, and others, who paid for that access. As Toyota hopes will happen again with hydrogen vehicles, this move increased the volume of hybrid activity and attracted significant attention. And sure enough, when hybrids entered the mainstream market in the late 2000s and early 2010s, Toyota sold nearly a quarter of a million Priuses a year, making the Prius the world’s third best-selling car line in 2012. Toyota is hoping for a repeat performance with hydrogen vehicles.

Right now, hydrogen vehicles are starting out small. Toyota plans to sell 700 Mirais this year. Hyundai, which is preparing to sell its Tucson Fuel Cell, plans to sell just 60, and Honda has just entered its final marketing stages. Meanwhile, General Motors, Ford, and Audi are all developing their own hydrogen vehicles. As Toyota expands the Mirai to the five states listed above next year and the other automakers make their own market entrances and extensions, time will tell whether Toyota’s calculated giveaway will succeed in pushing hydrogen vehicles into the mainstream.

It hovered over the annual Dartmouth Homecoming Bonfire. “It’s a drone,” my friend explained. A drone? The only drones I’d ever really heard of were furtive aircraft used for reconnaissance missions and surveillance over enemy territory.

But as the Homecoming weekend came to a close and the green “18” finally rubbed off of my chest, I found the incredible video captured by this high-flying piece of technology. I looked further, only to find that the drone market, like the footage I had just watched, is soaring. With sturdy frames and notable flight stability, these drones fly with surprising precision; GPS systems reduce, and mostly eliminate, flight errors made by the pilot on the ground.

In response to the booming commercial drone market, companies like GoPro have established strong footholds within the industry. And while GoPro has tapped substantial profits from major drone producers, the producers themselves have emerged as the ultimate winners. Parrot, for example, a French tech company that specializes in drone production, posted a 130 percent spike in drone revenue.

So what is it that makes these drones so enticing? Drones offer a glimpse into the future of technology. They produce 3D maps of terrain for agricultural research, giving farmers highly accurate aerial data. Companies like Amazon, the internet retail giant, hope to utilize drones in the delivery process. Some daring owners have even descended their drones into an active volcano, and when the camera melted from the overwhelming heat, the drone, operating through a programmed safety feature, was able to return to its owner on its own.

While the civilian drone industry is booming, the military drone industry still dwarfs it. Currently, civilian drones make up only 11 percent of the drone industry, although analysts expect this share to increase to over 14 percent. Though these numbers seem low, in an aerial drone market expected to climb to over $98 billion within the next decade, commercial drones still hold an impressive stake ($13.72 billion) in earnings.
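
That $13.72 billion figure appears to be nothing more than the projected civilian share applied to the projected market size; a quick back-of-envelope check (treating those two inputs as the only assumptions) bears this out:

```python
# Back-of-envelope check of the commercial drone figures cited above.
# Assumption: the $13.72B stake is simply the projected ~14% civilian share of a ~$98B market.
total_market_bn = 98.0      # projected aerial drone market, in billions of dollars
civilian_share = 0.14       # projected civilian/commercial share of that market

commercial_bn = total_market_bn * civilian_share
print(f"Implied commercial stake: ${commercial_bn:.2f}B")   # -> $13.72B
```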

Although the fiscal state of the drone market is optimistic, there are still obstacles. In recent dealings, the underfunded Federal Aviation Administration (FAA) has significantly handicapped the drone industry. Companies like Amazon have warned the U.S. government that “[they] will have no choice but to divert even more of our [drone] research and development resources abroad.”

On the FAA website, several rules restrict drone users, and notices litter the page explaining that “the Super Bowl is a no drone zone, so leave your drone at home.” Because these drones are classified as “model aircraft,” they fall under a specific set of rules. They must remain “below 400 feet, away from airports…and within sight of the operator.” Additionally, the FAA claims the ability to “take enforcement action against [those who]…endanger the safety of the national airspace system.”

Recently, in fact, a drone crashed onto the White House lawn, violating the FAA rules that restrict flight over Washington, D.C. In an interview with CNN, President Obama even remarked that the incident showed the need for tighter regulation of commercial drones.

And so, with the future of these drones looking unclear, we are left to grapple with two different ends of the spectrum: on one end, a commercial drone industry with considerable potential in the technological world, and on the other, the careful yet considerably limiting FAA. As one entrepreneur interviewed by Fortune put it, “There’s still a lot of uncertainty, but the time for this industry is now.”

This past September, NASA announced two landmark contracts with domestic aerospace firms. Boeing and SpaceX, two of the largest and arguably greatest innovators in space technology over the past decade, walked away with $6.8 billion to finalize their capsules and thruster systems so that they may provide transport to the International Space Station (ISS) for US astronauts by 2017. September’s announcement did more than just step up NASA’s current space programs, however; it signaled an unprecedented move to privatize the space industry.

Private US aerospace companies were eager to express their approval of these latest contracts, which vastly expanded earlier NASA initiatives. In 2009, NASA launched its Commercial Crew Program (CCP) in an effort to “stimulate the private sector to develop and demonstrate human spaceflight capabilities that could ultimately lead to the availability of commercial human spaceflight services.” Major players such as Boeing, SpaceX, and Sierra Nevada Corporation have been beneficiaries of the program, each receiving funding on the order of $500 million for the initiative.

The impetus behind this latest move has its roots in 2011, when the United States government terminated the shuttle program. Since then, US astronauts’ only option for reaching the International Space Station has been to catch a ride with the Russians, an alternative that is far from perfect. The funding cuts handed down from Congress hardly cover the hefty $71 million a seat that the Russians are charging. From the outset, that price proved to be an issue and spurred NASA to investigate other options. Along with these fiscal motivations, growing tension between the United States and Russia, recently exemplified by the animosity over Ukraine, added momentum to the push to return human spaceflight to US soil.

The decision to hand over space travel to the private sector is nothing short of a clear course change for NASA, and many argue it is a change in the right direction. In the September contracts, Chicago-based Boeing and Los Angeles-based SpaceX walked away with $4.2 billion and $2.6 billion, respectively. The funds awarded are earmarked for certification and final development of each company’s respective capsule: Dragon for SpaceX and CST-100 for Boeing. Both Boeing and SpaceX have proven that they can innovate, and NASA believes that the competition between the firms will serve to alter the face of spaceflight.

In a recent interview, SpaceX’s CEO, Elon Musk, said that these contracts are absolutely about driving down costs and eliminating the United States’ dependency on Russia. As the head of a relative newcomer to the field, Musk also pointed out that this contract helped secure a spot for SpaceX as a “key anchor tenant” in NASA’s plans for the future. NASA’s new initiative is a crucial next step for pioneering companies like SpaceX and will allow them to prove themselves as top innovators. The key, Musk believes, is commitment, especially when competing with the likes of Boeing, which plans on bringing aboard Amazon CEO Jeff Bezos and his company Blue Origin to help with new rocketry. As NASA turns its attention towards deeper space missions to Mars and asteroids, the companies are investing for the long run.

Mr. Musk, along with NASA and other private aerospace firms, seems to have his focus on both the long term and the big picture. After all, the long game is the nature of the space industry. Nothing happens overnight and nothing is cheap or simple. Planning years into the future is often required to put up a successful mission, and both SpaceX and Boeing have sought to break new ground. Both have created capsules that can splash down like conventional capsules but can be reused, saving a great deal of time and the enormous costs that come with having to start from scratch at the end of each mission. SpaceX’s Grasshopper rocket has pulled off some impressive feats in testing, showing that it can launch, soar to great heights, and employ its guidance sensors to re-land on the same pad from which it launched. Each company is deeply committed to inventing space technology that will cut costs and increase the efficiency of leaving our atmosphere, which, by the way, requires an escape velocity of roughly 25,000 miles per hour. Talk about a high-stakes buy-in.

No time since the Apollo age has been more exciting for the space industry. New ideas, fresh faces, and private companies are mixing it up with the old guard at NASA. Beyond the standard cast of characters in the established corporate world, some of the world’s most innovative billionaires have made substantial investments in private spaceflight, earning themselves a spot among the “space cowboys.” But new faces and bold innovation still need to come to terms with old problems and the inherent risks associated with space travel. The Challenger and Columbia disasters remain part of public consciousness and are a reminder of how wrong a mission can go. Such disasters have put a great deal of pressure on NASA to fly unmanned whenever possible. This is especially the case given the aging fleet of refurbished rocket engines and other parts now being used by some companies, a practice SpaceX is openly critical of. Like many, critics fear such stopgap measures will tarnish the privatization process as a whole.

In the past, NASA missions required a great deal of time, energy, and planning, which caused long gaps between missions. The goal of privatization is to make this a thing of the past and to make spaceflight more commonplace, less expensive, and more accessible. The development of new rocketry, fueled in part by fierce competition, sets a feverish tone that could catapult the United States back into manned space missions and routine space transport. NASA’s ultimate goal is to reach a point where spaceflight is a possibility for more than just astronauts. In a recent press conference, NASA administrator Charles Bolden said the contracts with Boeing and SpaceX bring with them the “promise to give more people in America and around the world the opportunity to experience the wonder and exhilaration of spaceflight.” The private sector has the capability to make this goal a profitable possibility, even with a ticket price well below the going $71 million the competition is charging.

 

An Idaho-based company might just have the solution to the issues that currently exist for solar energy. Solar Roadways, founded in 2006, came up with the ingenious idea of replacing all paved roadways in the United States with durable and versatile hexagonal solar panels. On May 18, 2014, a filmmaker unaffiliated with Solar Roadways released a video outlining the benefits of such a system, including easy maintenance, the ability to be heated in cold climates, and versatility. “SOLAR FREAKIN’ ROADWAYS!” was heard all around the internet not even a week after the promotional video was released. That was just the beginning.

Right now, the United States is facing the issue of deciding between maintaining poor infrastructure and updating it. The government, on both the national and state levels, has not been able to make a proper call yet. As a result of this indecision, highway associations and departments of public works across the United States have been making slapdash repairs that don’t last nearly long enough and end up causing more issues in the long term. The American Society of Civil Engineers (ASCE) released a quadrennial report in 2013 grading each sector of America’s infrastructure. Since solar roadways have the potential to affect electrical, bridge, and roadway infrastructure, the main focus in funding changes will be centered on those three aspects.

The Federal Highway Administration estimates that $170 billion must be spent annually through 2020 to significantly improve the conditions and performance of roadway infrastructure in the United States, representing an increase of $69 billion over current spending. Additionally, the Federal Highway Administration estimates that $20.5 billion must be spent annually through 2028 to improve the nation’s bridges, an increase of $8 billion over current levels. The ASCE estimates that between distributing energy and transmitting it from generating sources to distribution chains, the United States will need to spend close to $94 billion annually through 2020, a substantial increase from the current $63 billion. These estimates amount to a grand total of $108 billion in extra funding per year, and that is simply the amount estimated to be necessary for the United States to catch up to, not exceed, current infrastructure standards. Conversely, The Economist reported in June 2014 that implementing a system to replace the entirety of America’s roadways would cost an estimated $1 trillion. Furthermore, this figure does not take into account the research and development that would be necessary in order to implement solar roadways. Put another way, there is little doubt that, in terms of upfront costs, it would be more convenient and cheaper to maintain the status quo for roads in the United States. If Solar Roadways were simply a paving material, it wouldn’t be the cheaper option for curing the United States’ infrastructure crisis.
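
To see where that $108 billion figure comes from (assuming, as the paragraph implies, that it is simply the sum of the three annual funding gaps), a quick tally works out exactly:

```python
# Tally of the annual infrastructure funding gaps cited above (all figures in billions of dollars).
# Assumption: the $108B total is simply the sum of the three increases over current spending.
roadway_increase = 69.0          # FHWA: $170B needed annually vs. roughly $101B spent today
bridge_increase = 8.0            # FHWA: $20.5B needed annually vs. roughly $12.5B spent today
energy_increase = 94.0 - 63.0    # ASCE: $94B needed annually vs. $63B spent today

total_increase = roadway_increase + bridge_increase + energy_increase
print(f"Extra annual funding needed: ${total_increase:.0f}B")   # -> $108B
```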

Luckily, Solar Roadways has possibilities that far exceed those of asphalt and concrete. First and foremost, solar roadways would provide a national path towards energy independence. According to 2013 figures from the Energy Information Administration, fossil fuels make up 67% of the electricity generated in the United States. Constantly functioning solar panels covering the 31,250 square miles of roads, parking lots, driveways, playgrounds, bike paths, and sidewalks in the United States could change those proportions. According to Solar Roadways’ own estimates, their technology spread across the country could produce over three times the electricity that the United States currently uses every year. Solar roadways would thus not only allow for sustainable energy independence, but would also provide enough of a cushion to maintain energy independence even in the most drastic of situations.
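
A rough back-of-envelope calculation suggests the claim is at least plausible. The insolation, panel efficiency, and consumption figures below are illustrative assumptions, not numbers from Solar Roadways or the EIA:

```python
# Back-of-envelope check of the "over three times current electricity use" claim.
# Insolation, efficiency, and consumption values are illustrative assumptions only.
SQ_MILE_TO_M2 = 2.59e6

area_m2 = 31_250 * SQ_MILE_TO_M2        # paved surface area cited above, in square meters
insolation_kwh_m2_day = 4.0             # assumed average solar energy reaching the ground
panel_efficiency = 0.15                 # assumed conversion efficiency of the glass panels

annual_twh = area_m2 * insolation_kwh_m2_day * panel_efficiency * 365 / 1e9
us_consumption_twh = 4_000              # assumed rough annual US electricity consumption

print(f"Estimated output: {annual_twh:,.0f} TWh per year, "
      f"about {annual_twh / us_consumption_twh:.1f}x assumed US consumption")
```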

Secondly, Solar Roadways could help improve energy efficiency in the United States. One large issue with energy production in the United States is that generation is often far removed from the grid that serves demand, especially when it comes to nonrenewable sources. Because solar roadways would generate power along the very corridors where it is consumed, the grid could run concurrently with energy production in an efficient way, helping the United States control the overall costs of energy.

Beyond the primary issue of energy, solar roadways could improve transportation infrastructure in several other ways, starting with highway safety. Roughly 6.5 million accidents occur on asphalt roadways each year. Heated, illuminated panels that are easily replaceable and incorporate storm drains would increase visibility and give drivers more control during rough driving conditions, improving overall highway safety.

Finally, Solar Roadways would have the advantage of easy recyclability. The tiles created by Solar Roadways, from the glass surface to the inner components, are entirely recyclable. In contrast, concrete and asphalt recycling are labor and capital intensive processes that are not easily undertaken by any company or government institution that seeks to repave roads, sidewalks, or the like.

A report from the National Economic Council and the President’s Council of Economic Advisers from July 2014 notes that “a high quality transportation network is vital to a top performing economy.” It has already been established that the United States’ transportation infrastructure is of poor quality and may in fact be a drag on economic growth and productivity. The process of introducing solar-panel-laden roads may also prove to be a prime opportunity for the federal government to implement more stringent quality standards on infrastructure.

Of course, there remain many large issues and question marks as to the feasibility of implementing a nationwide conversion to Solar Roadways. The largest issue that arises when shifting a major share of asphalt and concrete production to solar panel production is labor displacement. The asphalt production industry employs somewhere around 300,000 Americans, and the concrete production industry employs close to 170,000 Americans. Solar roadways would thus need to make up for close to 450,000 jobs in manufacturing, engineering, and maintenance to be a viable alternative that lets the United States accept the technology without stalling job growth. Unfortunately, projections of exactly how many manufacturing and engineering jobs Solar Roadways can produce are unavailable due to a lack of empirical evidence.

Furthermore, solar roadways have only been produced and tested as prototypes in a small shop setting in Idaho; they have not been produced on a large scale. Solar Roadways has yet to analyze the impacts that different weather and geographic conditions could have on its product. Making the move to mass production will also be a significant challenge: these specialized solar panels are meticulously crafted on a small scale that has not yet been translated into an industrial-level operation. Before solar roadways can be implemented, the company needs to iron out all of the issues that can occur during mass production or implementation. While many of these issues may be worked out by the end of the prototype phase, the fact remains that the technology is not yet viable and cannot be adopted by the United States as is.

Finally, strong political interest groups may also prove to be a stumbling block for the energy infrastructure startup. Established industries such as asphalt and big oil have political clout and are certain to lobby against solar roadways. Oil is one of the largest industries in America and possesses deeply entrenched political power in Washington, rivaled only by the American Medical Association and the National Rifle Association. Solar Roadways, if it wants to find a solid place in America, will thus have to face and combat considerable political opposition.

There is no doubt that Solar Roadways has great prospects as a technology. It will help the world reduce its carbon footprint and dependence on non-renewable sources of energy. Solar Roadways offers important improvements to the current system of highway infrastructure ranging from safety to energy efficiency. While nothing is yet concrete for solar roadways in terms of implementation, the potential for solar roadways, especially in a country with thousands of miles of roads like the United States, is limitless.

The biotech industry has been heating up. As of mid-February this year, the number of IPOs by biotech companies in 2014 has nearly reached 20, representing capital raising efforts of over $1.1 billion. During the first biotech boom era, the year 2000 saw the IPOs of 26 biotech companies, raising $1.9 billion.

This year, the IPO class of biotech companies represents a broad variety of biopharmaceutical endeavors, from gene therapy to protein therapeutics to personalized immunotherapies. As a result of an increased appetite for risk on the part of investors, the high uncertainty of a biotech venture has become easier to stomach.

Amsterdam-based uniQure offers the first, and currently only, approved gene therapy product in the European Union. Gene therapy is a promising new form of disease treatment that targets mutated DNA within a patient’s cells. The firm raised $82 million after issuing 5.4 million shares at $17 per share, 21% higher than the midpoint of its filing range.

As a company with a drug that has already been approved, uniQure is much more likely to succeed than other companies that may be in the earlier stages of developing a drug. In fact, in the drug development business, many early-stage compounds will never make it to market. The most promising compounds, after undergoing rigorous testing to ensure they will be safe in humans, take several years to reach clinical stage. Even then, based on past data for the productivity of pharmaceutical R&D, only 20% of candidates entering the clinical trial phase will receive FDA approval.

That biotech is heating up is also evident with the IPO of Eleven Biotherapeutics, which raised $50 million by pricing its shares at $10. Though the shares priced at 28% below the midpoint of the company’s filing range, Eleven’s stock price rose 8.5% on the first day of trading.

While uniQure’s gene therapy drug, Glybera, treats a rare condition called lipoprotein lipase deficiency (LPLD), Eleven’s lead drug candidate treats dry eye disease (DED), a disease that an estimated 26 million patients in the United States have. Investors thus flocked to the stock despite the company having no approved drug, because the larger target market for Eleven’s potential product buoys its probability of commercial success.

Another cause of the increasing investor interest in the biotech market this year is the success of biotech offerings last year. In fact, while the number of biotech companies that have offered shares in the public markets for the first time this year is already approaching 20, a figure that is quite impressive on its own, it comes on the heels of a record-breaking 47 biotech IPOs in 2013.

Several successful companies in the 2013 IPO class include Bluebird Bio, Aratana Therapeutics, and Foundation Medicine. Orphan drug developer Bluebird Bio raised $101 million on June 19 last year at a per-share price of $17; Bluebird now trades 38% above its IPO price. On June 27, 2013, animal-health company Aratana Therapeutics priced at $6, raising $35 million; today, Aratana trades 243% above its IPO price. Third, personalized cancer therapy company Foundation Medicine sold nearly 6 million shares last September 25 for $18 per share, rising 96% on the first day to close at $35.35 per share; Foundation is now 76% above its IPO price.

For many investors, the current atmosphere of optimism in the biotech sector is reminiscent of that from a decade ago, after medical research brought visions of leaping advances in the industry. Specifically, April 2003 saw the completion of the Human Genome Project, an international research collaboration that led to the sequencing of the tens of thousands of genes that make up the human genome. With this level of detail and information, researchers could seek further understanding of human disease. Many biotech companies today are working towards the commercial realization of those past advances in medical research.

This year will be a hot year for the biotech sector with no shortage of companies entering the playing field.

Microsoft has been a leader in the software industry for over thirty years, but its reign as a powerhouse built on the sale of computer code may be coming to an end. The company is far from dead, however, as it builds a new future in the increasingly competitive realm of hardware, as evidenced by the Xbox, the Surface, and now the acquisition of Nokia’s devices and services business. The end of Microsoft as a software giant marks its beginning as a hardware player.

The rise of mobile computing has not been kind to Microsoft. The firm built its empire on a combination of sales of the Windows operating system and its Office productivity suite. While the enterprise market remains attached to Microsoft, the personal computer market in developed countries is saturated, and consumers have flocked to phones and tablets as the digital wave of the future. Though Microsoft was an early competitor with its Windows Mobile offerings in the early 2000s, the iPhone and Android have decimated its mobile operating system market share, now down to 3.7% this year.

Microsoft has tried valiantly with its new Windows Phone offerings to stay relevant in the smartphone age, but a modern aesthetic and good feature set are clearly not enough to do to the smartphone industry what Microsoft did to the desktop. The executives in Redmond, Washington at Microsoft’s headquarters may be removed from Silicon Valley, but they are not oblivious.

In September, Microsoft showed it understood its failure and Apple’s success. By acquiring Nokia’s device business, Microsoft will be able to sell a fully vertically integrated product. Google has followed a similar approach with its purchase of Motorola as the vehicle for its Android operating system. By making the software and hardware together, Microsoft can make refinements it could only dream of in the past. To paraphrase the philosophy Steve Jobs credited for Apple’s success, Microsoft can make it “just work.”

A Microsoft phone running Microsoft software brings an even greater advantage beyond its fit and finish. People don’t buy software anymore. One can’t purchase an operating system of one’s choice for a phone the way one could for a PC. Even on the PCs that remain, Apple’s transition to free distribution of its operating system and productivity suite makes Microsoft’s sales model seem more archaic by the day. One buys the hardware; that’s where the profit is. In the first quarter of 2012, smartphone industry profits were $14.4 billion, much of it going to Apple. In that same quarter, Microsoft as a whole earned a profit of only $5.7 billion. With sales growing every year and large margins on hardware, smartphones are clearly the place to be.

Hardware sales aren’t new to Microsoft. Since it launched the Xbox gaming console in 2001, its Entertainment and Devices Division has generated billions in revenue annually. Even if people don’t think of Microsoft as a hardware company, its vertical product approach in video gaming has been successful for over a decade.

The acquisition of Nokia and a future of Microsoft-branded phones were hardly unexpected. Underperforming Windows Phone sales through third-party hardware manufacturers and the bungled launch of Windows 8 combined to push CEO Steve Ballmer into retirement and generate speculation about something new.

In 2010, former Microsoft executive Stephen Elop was appointed CEO of Nokia. As the first non-Finn to head the company and an employee steeped in the ambitious Microsoft culture, some viewed him as a Trojan horse intent on ultimately making Nokia into a subsidiary of Microsoft. His initial decision to have Nokia, a company that had failed to adapt to the rise of smartphones, discard its homegrown operating system in favor of Microsoft’s new Windows Phone software was a cause of great concern for industry watchers. Windows Phone was untested, and Android seemed to be a much better choice. Arguing that he didn’t want Nokia to become another generic Android phone manufacturer, Elop pushed ahead with Windows Phone development, ultimately culminating in the current Lumia line of Nokia smartphones.

Windows Phone was not Nokia’s savior, however, and the company has continued to lose hundreds of millions of dollars quarterly. Its stock price has plummeted, and Microsoft has stepped in as its rescuer. Stephen Elop recently said, “I feel sadness because we are changing Nokia and what it stands for. And for all of us…there is ambiguity and concern because it is so hard to know what the future holds. But we have to do the right thing.” It’s unclear if Stephen Elop was in fact a Trojan horse, but his agreement to Microsoft’s offer of acquisition and a new position as an Executive Vice President at his old company does not suggest someone who always had Nokia’s best interests at heart.

With its expansion into direct hardware design, Microsoft is aiming to be the next Apple before smartphone buyer loyalties become too cemented. It’s unclear if it will be successful, as few complaints were aimed at Windows Phone as an operating system or at the third-party hardware it was sold on before. At the same time, Microsoft was a surprise success in the video game industry despite launching its first console decades after competitors. Microsoft is still a giant, and it has the reserves to invest and persevere for some time. Whether it can ever make a successful transition to hardware giant, though, is something only time will tell.

 

Touchless technology is changing how we interact with computers by allowing alternative ways to use existing applications and by providing programmers with the chance to develop new forms of software. Developed by Leap Motion, a San Francisco-based technology company, the Leap Motion Controller is one such piece of touchless hardware, and it appeals to those interested in exploring these new avenues of technological communication. The device can be purchased from http://www.leapmotion.com for eighty dollars and, as of July 28, is also selling in Best Buy stores. The release of the Leap Motion Controller has garnered a large amount of media attention and generally mixed reviews. This innovation certainly presents us with new opportunities, but is there a market for this type of product?

The Leap Motion Controller is a small, sleek device that can be paired with a variety of interactive programs. Rick Broida, a journalist for Computerworld, describes the item as “attractive and surprisingly compact,” and with a height of 0.5 inches, a width of 1.2 inches, and a weight of 0.1 pounds, the controller appears to offer more than its unassuming size would indicate. The product sits below the computer monitor and utilizes near-infrared LEDs and CMOS image sensors to recognize the user’s gestures. After plugging the device into a USB port and installing the necessary software, the controller is ready to function. Users can access the program Airspace to launch applications and purchase additional products from the Airspace store. Some notable applications include Corel Painter Freestyle (a painting program), popular games such as Cut the Rope and Fruit Ninja, Cyber Science (virtual dissection software), and Touchless, a program that enables mouse-free use of the general computer interface.
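
For developers, the controller’s tracking data is exposed through an SDK. The minimal sketch below polls the device for frames and prints palm positions; it is based on the Python bindings of the launch-era SDK (module name Leap), so treat the exact names as assumptions if you are working against a different SDK version.

```python
# Minimal polling sketch for the Leap Motion Controller.
# Assumes the launch-era Leap Motion SDK's Python bindings (module "Leap");
# names may differ in later SDK versions.
import time
import Leap

controller = Leap.Controller()
time.sleep(1)                            # give the tracking service a moment to connect

for _ in range(50):                      # sample tracking data for about five seconds
    frame = controller.frame()           # most recent frame of tracking data
    for hand in frame.hands:
        pos = hand.palm_position         # 3D position in millimeters, relative to the device
        print("palm at x=%.0f y=%.0f z=%.0f mm" % (pos.x, pos.y, pos.z))
    time.sleep(0.1)
```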

What is the actual demand for a product such as the Leap Motion Controller, and what is the overall viability of touchless technology in the marketplace? First, the device is relatively cheap and easy to set up, making it available and accessible to most categories of consumers. In addition, Leap Motion reviewers consistently comment that the controller performs particularly well in certain niche areas. Michael Steeber of 9to5Mac contends that Leap Motion works well for “natural, interactive gameplay,” and MIT Technology Review’s Rachel Metz posits that the technology can improve “computer-aided tasks, like drawing, modeling, and virtual dissections, as well as making it easier to surf the web.” Not only is there potential demand from consumers seeking computer assistance for specialized activities, but average users can also enjoy Leap Motion’s unique function as a gaming device. Overall, it appears that there is a substantial potential market for touchless devices.

Leap Motion, however, is not without its flaws. Although the controller worked well with games and other inherently interactive programs, many reviewers found it difficult to use the device for general computer activity, such as browsing the Internet. Ostensibly the controller is able to track the user’s gestures to 1/100th of a millimeter, but reviewers often felt frustrated when it failed to register simple commands. Another issue with the product is that continual use of the controller proves to be a tiring endeavor. Many complain of aches and soreness after holding their arms above the keyboard for an extended period of time.

Although Leap Motion has a few faults, it’s important to understand that the product is currently akin to a prototype and serves as an early example of the potential uses for touchless technology. Additionally, Leap Motion appears poised to reach a wider market through computer manufacturers, as Hewlett-Packard plans to integrate the technology into its own products. Looking towards the future, touchless devices seem bound to permeate our everyday lives and improve tasks that are unsuitable for the mouse and keyboard.

While the smartphone has taken the world by storm and gained millions of fans and users within the past few years, there is a gadget that promises the same functions – all on your wrist. The battle for customers in the smartphone market may well extend to the watch market, as many large tech companies like Google and Apple plan to launch their own versions of a smartwatch.

So what exactly is a smartwatch? A smartwatch is a watch that connects wirelessly with a smartphone through Bluetooth and takes in wireless information to display news, weather, email, social network notifications, incoming calls, and more. Some smartwatches also offer GPS navigation, fitness tools, gaming, and remote music controls. The smartwatch enables users to receive notifications and control various smartphone functions remotely without having to constantly check their phone.

While they may not have yet popped up on the general public’s technology radar, smartwatches are not new products. They have been around since at least 2003, when Microsoft’s SPOT (Smart Personal Objects Technology) watches delivered information like news and the weather. Those watches did not garner much interest due to high costs and bulkiness, with production ending in 2008 and a #15 ranking on CNET Executive Editor David Carnoy’s list of “The decade’s 30 biggest tech flops.” However, there are rumors that Microsoft plans to re-enter the smartwatch market with a new design.

Currently, Sony and Pebble are two of the major players in the smartwatch market. Sony’s second-generation smartwatch, the SmartWatch 2, was released this June and works as a regular digital watch on its own, or pairs over Bluetooth and one-touch NFC with Android smartphones to unlock its other features. It has a square color screen where users can view email and text messages, handle incoming calls, and adjust music settings remotely. But while the SmartWatch 2 boasts over 200 apps inherited from its previous release, it falls short of the Pebble watch in terms of iPhone compatibility.

Pebble Technology, a startup from Palo Alto, launched its smartwatch with a black-and-white e-paper screen (which saves battery life) in 2013. In need of capital, Pebble raised more than $10 million within 40 days from 68,929 backers on the crowdfunding site Kickstarter, becoming the most highly crowd-funded project at the time. The Pebble watch works with both iPhones and Android devices through Bluetooth to send notifications via silent vibrations. Pebble has arguably one of the most successful smartwatch products on the market, with at least 275,000 orders as of July 11, 2013.


There has been much fervor in the developing smartwatch market, with the biggest tech companies gearing up for launches later this year or next year. Apple, Google, and Samsung have all been confirmed to be working on smartwatch products. Apple reportedly will launch the “iWatch” in 2014 and has already attempted to register “iWatch” as a trademark in many places, such as Taiwan, Russia, Mexico, and Japan. According to Bloomberg, Apple has assigned 100 designers and engineers to work on the new iWatch and has filed at least 79 patent applications containing the word “wrist.” Apple’s strong brand image and reputation for sleek, quality products could give a strong boost to the smartwatch market, just as it did with the iPod and the iPhone. Marshall Cohen, an analyst at NPD Group, asserts that “Apple can merge fashion with function,” and that a smartwatch from Apple “could triple the size of the watch business in a year or two. They have the opportunity to get everyone that owns a cell phone to go out and buy another watch.” Compatibility with the iPhone and other products is sure to be a plus with consumers as well.


Google has already delved into the wearable technology market with the Google Glass. Motorola, owned by Google, has put out its own version of the smartwatch with the Motorola MotoACTV sport watch, which can be connected with a smartphone but also has some autonomous fitness functions such as GPS exercise tracking and step tracking.

Samsung will reportedly launch a Samsung Galaxy Gear smartwatch, to be announced alongside the Galaxy Note 3 this September at the IFA (Internationale Funkausstellung) consumer electronics show in Berlin. Google and Samsung will most likely pair their watches to be compatible with Android devices and apps.

Avi Greengart, an analyst at the research firm Current Analysis, states that “2013 may be the year of the smartwatch” because “components have gotten small enough and cheap enough.” ABI Research, a market research and intelligence firm, projects that more than 1.2 million smartwatches will be shipped in 2013. But will the new line of smartwatches be able to garner more than the modest successes of their predecessors? It is debatable whether the average consumer will believe a smartwatch is necessary when he or she already has a smartphone. A smartwatch is more of a smartphone accessory, and a smartphone has access to all the same functions and more, with a larger user interface. On the other hand, having these smartphone functions on a watch could make life easier, especially in situations where it’s more polite to glance down at your wrist than to take out your phone. Manufacturers will have to tackle the challenge of creating displays that are large enough to be useful yet still stylish enough to attract both men and women. Adequate battery life and pricing could be concerns as well.

As for now, Pebble’s smartwatch, priced at $150, is available at local Best Buy stores, with other smartwatches sold mostly online. We can only wait and see whether the upcoming lineup of smartwatches will live up to the hype.

 

Imagine having the ability to create nearly anything in the comfort of your own home. Or at work. Or in a classroom. Or in a hospital.

Additive manufacturing, popularly known as “3D printing,” has the potential to revolutionize our personal lives. 3D printing is the opposite of the traditional, subtractive manufacturing process, which essentially pares down a large piece of raw material, be it wood, metal, or plastic, into a product we can use. A 3D printer transforms a computer-aided design (CAD) file—a virtual blueprint of anything from a toy apple to an entire car part—into a physical product. The printer slices the blueprint into digital cross-sections, and mechanisms within the machine translate those cross-sections into layers of liquid or solid material, rendering the final product.
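
To make the slicing step concrete, the sketch below shows the core idea in miniature: a model stored as triangles is intersected with a series of horizontal planes, and each plane’s intersection segments trace the outline the printer lays down for that layer. This is an illustration of the concept only, not any particular slicer’s code; real slicers also join segments into loops and add infill and supports.

```python
# Simplified illustration of slicing a triangle mesh into printable layers.
# Real slicers additionally order segments into closed loops, generate infill, supports, etc.

def slice_triangle(tri, z):
    """Return the 2D segment where a triangle crosses the plane at height z, if any."""
    points = []
    for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):   # the triangle's three edges
        if (z1 - z) * (z2 - z) < 0:                                   # edge crosses the plane
            t = (z - z1) / (z2 - z1)                                  # interpolation factor
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, layer_height, num_layers):
    """Produce one list of 2D outline segments per layer."""
    layers = []
    for i in range(num_layers):
        z = (i + 0.5) * layer_height
        layers.append([s for s in (slice_triangle(t, z) for t in triangles) if s])
    return layers

# A single tetrahedron as a toy "CAD model": four triangles, each a list of (x, y, z) vertices.
tetra = [
    [(0, 0, 0), (10, 0, 0), (5, 10, 0)],
    [(0, 0, 0), (10, 0, 0), (5, 5, 10)],
    [(10, 0, 0), (5, 10, 0), (5, 5, 10)],
    [(5, 10, 0), (0, 0, 0), (5, 5, 10)],
]
for i, layer in enumerate(slice_mesh(tetra, layer_height=2.0, num_layers=5)):
    print(f"layer {i}: {len(layer)} outline segment(s)")
```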

3D printing will irrevocably change the retail industry. It’s a lofty statement, considering the limited usage of the technology today, but consider the implications of future innovations in efficiency and an expanding user base. While 3D printing’s short-run benefits, from sheer convenience to highly customized products, are obvious, its long-run benefits are more powerful. 3D printing will force retailers to innovate and to deliver more value to their customers in order to prevent the retail industry from disappearing entirely. Though there are doubts about 3D printing’s safety and viability as a practical technology, what is undeniable are the ripple effects it will cause in the retail industry and beyond.

For example, 3D printing may do to the toy industry what peer-to-peer file sharing (torrents) did to the entertainment industry. The ability to print the latest action figure for your child’s birthday present could soon replace those last-minute scrambles to Toys R Us. Though most affordable 3D printers today do not have the capacity to make complex objects with intricate parts, this may very well become possible in the near future. The capability of such technology gives consumers enormous power and could certainly prove to be a disruptive force within the retail sector, much like torrents were in the music industry. The key distinction here is that the entertainment industry can rely on the “experience” factor to compensate for lost revenue from illegal downloads. Live experiences such as concerts, music festivals, and movie theater viewings continue to be popular with consumers.

However, if 3D printing continues to improve in detail, quality, and affordability, traditional retail outlets like Toys R Us will be able to offer consumers few advantages. In the near term, producers of homogeneous products such as Barbie dolls and solid action figures may see their sales decrease as consumers gain the ability to produce these toys at home. Model-building kits and toys can be easily reproduced at home, as 3D printers excel at making small, solid parts.

Programmers and designers of 3D-printed objects also face issues similar to those software and media developers face. Pirating would certainly pose problems for developers. One way to tackle this would be to make the designs one-time-use or disposable: after the object is printed once (or perhaps ‘x’ number of times), the schematic for the object automatically deletes itself. Another way developers could combat pirating would be to not allow consumers to download or install files in the traditional sense. Instead, following in the footsteps of companies like Blizzard (see Diablo 3), they could require users to stream the file or maintain an Internet connection in order to regulate usage of designs.
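
As a rough illustration of the one-time-use idea (a hypothetical scheme invented here for clarity, not any company’s actual system), a design server could hand out single-use download tokens and refuse them once their print allowance is spent:

```python
# Hypothetical sketch of one-time-use print tokens; not any vendor's actual DRM scheme.
import secrets

class DesignServer:
    def __init__(self):
        self._tokens = {}                       # token -> (design_id, prints_remaining)

    def issue_token(self, design_id, prints_allowed=1):
        token = secrets.token_hex(16)           # unguessable, single-use download token
        self._tokens[token] = (design_id, prints_allowed)
        return token

    def fetch_design(self, token):
        """Return the design once per allowed print, then forget the token."""
        if token not in self._tokens:
            raise PermissionError("token unknown or already spent")
        design_id, remaining = self._tokens[token]
        if remaining <= 1:
            del self._tokens[token]             # the schematic effectively deletes itself
        else:
            self._tokens[token] = (design_id, remaining - 1)
        return f"<model data for {design_id}>"  # placeholder for the actual print file

server = DesignServer()
token = server.issue_token("action_figure_v2")
print(server.fetch_design(token))               # first print succeeds
# A second call with the same token would raise PermissionError.
```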

It’s about time existing hardware caught up with software, and 3D printing allows the ‘maker’ community to bring hardware into the twenty-first century. Let’s say a mechanical engineer needs a physical model of a highly specific car engine part. It is unique, intricate, and incredibly difficult to craft by traditional means, which would require expensive molds and machinery that would never pay for themselves. Basic microeconomic theory dictates that the average fixed cost of a good falls the more units you produce (even though marginal cost eventually rises), so in this case the engineer would pay an enormous cost for the first, and presumably only, item he or she produces. In cases like these, where economies of scale do not factor in, 3D printing is the superior method of production, and computer software already allows programmers to create a virtual model of nearly anything.
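
A toy cost comparison makes the economies-of-scale point concrete. The dollar figures below are invented purely for illustration: traditional tooling carries a large fixed cost that pays off only over many units, while 3D printing has essentially no fixed cost but a higher cost per part.

```python
# Toy cost comparison; all dollar figures are invented for illustration only.
def traditional_cost(units, mold_cost=50_000, per_unit=2.0):
    """High fixed cost (molds, tooling), low marginal cost per part."""
    return mold_cost + per_unit * units

def printed_cost(units, per_unit=40.0):
    """No tooling cost, but each part is slower and pricier to produce."""
    return per_unit * units

for n in (1, 100, 1_000, 10_000):
    t, p = traditional_cost(n), printed_cost(n)
    cheaper = "3D printing" if p < t else "traditional tooling"
    print(f"{n:>6} unit(s): traditional ${t:>9,.0f} vs printed ${p:>9,.0f} -> {cheaper} wins")
```

With these made-up numbers the crossover arrives somewhere above a thousand units, which is exactly the regime the lone engineer never reaches.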

What is interesting is that there are very few successful open-source hardware companies. MakerBot, very much a supporter of the “do it yourself” ethos of the maker community, was one such company. The popular 3D printing startup raised eyebrows throughout the open-source community and the industry when an updated model of its “Replicator” printer, released in the fall of 2012, was no longer open source. It was clearly a move aimed at transforming its image from hobbyist startup to professional company. With its hardware now safe from clones and copycats, it is likely to raise more revenue through greater sales and give investors more confidence in its business model.

Maker enthusiasts and proponents of open-source hardware (OSHW) would argue, though, that MakerBot betrayed the spirit of democratic innovation that it, and arguably 3D printing itself, once embodied. MakerBot has essentially shifted its “competition” from an altruistic, hobbyist community to profit-driven closed-source companies, which may or may not pose a greater threat to the company. OSHW supporters ask: if the community can develop a more effective, cheaper 3D printer through incremental improvements and iterations, shouldn’t that benefit everyone?

My conjecture would be a conditional “yes.” I ultimately believe in the ingenuity of crowd-sourced thinking in certain instances. Browsers like Google Chrome and Mozilla Firefox have proven the value of an enthusiastic community in furthering software development. Wikipedia has demonstrated the efficiency and accuracy of a dedicated, educated user base. However, the only way MakerBot and companies like it could have engineered such remarkable technologies was through significant financial backing, conditional of course upon profits. Thus it seems that only through a closed-source model, perhaps with certain open source characteristics with respect to printing designs, can 3D printing thrive as a technology.

Though much of our discussion has been focused on the retail sector, 3D printing’s impact need not be limited to retail. The concept of transplanting 3D printing technology into the pharmaceutical and healthcare industries is not far-fetched and could greatly improve medical treatment. Imagine if your dentist told you to download your dentures after an annual checkup. After making a mold of your teeth and scanning it into his computer, he would hand you a “prescription” with the following instructions: “Download. Print.” The convenience of 3D printing encourages prototyping and modeling, which could in turn spur medical breakthroughs. What if printers used organic matter to create cells, tissues, and possibly even entire organs? Doctors could print biotic products, such as skin tissue, tailored to specific diseases and patients.

While 3D printing is certainly revolutionary, it’s important to remind ourselves of the fundamental limits of this technology. Consumers will probably never be able to “print” a Nintendo DS at home, nor will they print ordinary domestic items such as silverware and furniture. Manufacturers of toys can at least rejoice in the fact that 3D printers cannot, and possibly never will be able to, produce the complex toys with electronic parts currently sold in toy stores.

Enthusiasts tout 3D printing as the most “democratic” form of manufacturing. This sentiment is rather unfounded. Economies of scale mean that traditional manufacturing methods will always have an edge over 3D printing in terms of cost and efficiency. It is unrealistic, then, to hope that someday consumers will have mini-factories at home capable of creating anything at a moment’s notice. Furthermore, unless there are revolutionary advances in technology, the quality of 3D-printed objects will remain inferior to their traditional retail counterparts. The plastic resins and methods used to create these objects, while not flimsy, lack the strength and durability of more commonly used materials.

For such a versatile technology, it is easy to see how it could be used for illegal and even dangerous purposes. Indeed, once the technology improves, this fear may prove to be a formidable obstacle to widespread adoption. The designs for 3D-printed guns, for example, can be found online. Such guns function like normal ones and fire standard ammunition. It is even conceivable that one day ammunition could be printed at home, though that would be trickier, as bullets require gunpowder. Many sensationalists and doomsayers correctly identify possible abuses of 3D printers, but these concerns do not hold up to close scrutiny.

Many overestimate the danger such abuses can pose. For one thing, ammunition production is still beyond the scope of current 3D printing technology. If the owners of a printed gun don’t have access to regular ammunition, their gun is effectively useless. Furthermore, their access to ammo in the first place raises the question of why they would even go to the trouble of purchasing a printer, printing materials, and schematics rather than buying an actual gun. The image of a gun enthusiast or hardcore criminal going out of their way to employ a high-tech method of production to create a gun, rather than getting one off the black market, is slightly preposterous. In America in particular, if a person really wanted a gun but for some reason did not have access to gun stores, there are far more convenient and cheaper ways of getting one. Furthermore, with relatively simple “how-to” guides for homemade bombs and other weapons available online, is it really reasonable to believe that the mass adoption of 3D printing poses a significant threat?

For entrepreneurs and serial inventors, 3D printing makes prototyping a new product incredibly convenient. Because it simplifies the production and prototyping stage, the entire product development process and team can be streamlined. It follows, then, that this boost in efficiency should lead to a burst of venture and entrepreneurial activity. The availability of 3D printers for common usage in hotbeds of startup activity such as college campuses will prove to be a boon for the entire industry. Thus, while 3D printing is an impressive technology in its own right, its true value lies in its ability to spark other innovations and lead to the breakthroughs of tomorrow.

Netflix, a service that gives its users access to television shows and movies via online streaming and DVDs by mail, saw its membership increase by 610,000 in its latest quarterly results. According to CEO Reed Hastings, this marked the biggest surge in the number of users, a sharp rebound from a debacle last summer when 800,000 members quit the service following an unexpected price hike and Netflix’s decision, which has since been reversed, to separate the company’s streaming and mail-order DVD rental businesses.

In an industry once dominated by firms like Blockbuster and Hollywood Video, Netflix has since overtaken the competition by revolutionizing the video rental process. First, it pioneered the movie-by-mail business that put many video rental stores out of business. Instead of having customers travel to the stores, Netflix essentially reversed this relationship by reaching out directly to the customers themselves.

The company also distinguished itself by making a smooth transition from “old technology” (delivering movies via mail) to “new technology” (an online database that streams movies and shows). Since then, Netflix has transformed the way people watch television shows and films. Currently, it boasts over 75,000 titles for its 24.4 million subscribers in the US, Canada, and even parts of Central America.

Critics of the company are quick to posit that the success of Netflix is an ephemeral phenomenon. They claim that just as Netflix was able to overtake its competitors in a short span of time, it too will be a victim of technological innovation in the near future. All businesses rise and fall. People believed MySpace would last for a long time, given the great level of innovation in networking it brought to the market at the time… until companies like Facebook emerged. Perhaps there will come a time when a new innovative high-flier shuts down Netflix’s business. However, current conditions indicate that it won’t happen any time soon.

In the midst of heightened public sensitivity over Internet piracy due to H.R. 3261 (the Stop Online Piracy Act) and S. 968 (the Protect Intellectual Property Act), Netflix is one of the few companies insulated from the consequences of this legislation. Although Representative Lamar Smith (R-TX) has withdrawn the House bill, the Senate legislation is still up for debate. However, regardless of whether PIPA is passed into law or defeated, Netflix will be in good standing.

If the bill passes, Netflix would become the most financially viable option for watching shows and movies for many consumers. Aside from the computer gurus who might be able to work around the regulations and download from illegal websites or networks, most consumers would be left to seek legal alternatives. Faced with paying ten dollars for a single movie ticket versus eight dollars for a monthly Netflix membership, consumers would much rather choose the latter, since a subscription gives them access to thousands of television shows and movies for less than the price of seeing one film in the theater.

Furthermore, the ability to stream shows online eliminates the implicit costs, in time and money, of traveling to video stores or movie theaters. Therefore, consumers who might otherwise not have subscribed to Netflix will have even more incentive to subscribe if online piracy regulations are passed into law.

This may give Netflix near-monopoly power over the industry, which it could leverage to raise membership fees. However, as long as it adjusts its monthly charges at a reasonable rate, it seems hard to doubt the continuity of Netflix’s high-flying success. Even if the bill is not passed, Netflix can still benefit, as some consumers worried about the legality of pirated content may end up switching to legal services like Netflix.

Additionally, in a high-piracy environment, Netflix can profit from favorable contracts with media companies. As content owners grow more disheartened at seeing their content shared without generating any revenue, they will offer more favorable negotiating terms to Netflix, which can help them monetize their work beyond box-office tickets, pay-per-view, or DVD sales.

Internet piracy will never completely go away, but the ebb and flow is something that Netflix can capitalize on. Perhaps Netflix is the true winner of this ongoing battle over online piracy.