On February 20, 2019, South Korean electronics company Samsung announced the Galaxy Fold, the first cellular phone with a fully folding screen from a major cell phone manufacturer. The Fold was originally slated for release in July 2019 but was delayed after test units sent to reviewers began showing severe issues. After the units were recalled, many journalists described the entire launch as an outright failure and an embarrassment for Samsung. Now that Samsung has implemented several key hardware adjustments and released the Fold in many major markets, it remains to be seen whether the device will be remembered as a failure or as a major step in the future of the industry. Although it has its flaws and suffered a severely troubled launch, the Galaxy Fold demonstrates the innovative spirit that defines Samsung’s largest edge over its competition, including technology giants Apple and the China-based Huawei.

The Fold’s defining feature is evident in its name. It consists of two major pieces connected by a hinge, with a smaller screen on the outside and a much larger, flexible screen on the inside. The outer 4.6” screen is largely used for minor tasks, while the inner screen unfolds into a 7.3” display designed to handle the majority of tasks. Because of the phone’s complexity, it costs around $2,000, a relatively high price compared to non-folding alternatives. Its specifications are on par with or above the rest of this year’s flagship phones, from processor to battery, and it carries six cameras: one above the smaller screen, three on the back, and two in the top right corner of the inner foldable screen. That foldable screen is the main selling feature, with a screen size significantly larger than any non-folding phone’s. At 7.3”, it approaches the 7.9” iPad Mini in size while fitting into the form factor of a regular-sized cell phone.

Many of the major complaints about the Fold have been related to its longevity. The test units that were sent out had a few catastrophic issues. The phones shipped with a protective layer over the foldable screen; many reviewers assumed it was a removable screen protector, but peeling it off destroyed the foldable display entirely, and the earliest models carried no warning that this was the case. The second major issue was the hinge holding the phone together. While Samsung said it was robust and rated to last “at least 200,000 folds and unfolds,” this did not account for outside dust and sand, which could easily enter the phone through the hinge, accumulate under the screen, and ultimately cause bulges and more broken displays. At the time, some journalists said this would be a major and complicated issue to resolve, but Samsung seems confident in its fixes.

Samsung fixed the protective-layer problem by extending its edges under the bezel of the phone, so that no user would be tempted or able to peel it off the screen. This has so far proved effective, as no further incidents have been reported. As for the hinge, Samsung added a small plastic cap to close the gap through which particles were entering. This has been relatively effective, but because the phone still has minor gaps, the fix has yet to prove itself completely.

Despite its structural issues, the Fold demonstrates Samsung’s overarching mentality of innovation compared to its competitors. The smartphone space is becoming increasingly crowded, and companies are trying to find ways to distinguish themselves. Apple has a large and dedicated consumer base, largely because its “ecosystem” encourages buying products specifically from Apple; because of this brand loyalty, it has shifted away from design-focused endeavors toward services such as the upcoming Apple TV+. Samsung, on the other hand, has realized that in order to draw in new consumers, it needs to differentiate itself and provide exciting new hardware features. Another of its screen-based innovations was the “hole punch” display, which cuts a camera-sized hole out of the screen in order to give the display more real estate on the phone.

More broadly, Samsung has 36 research and development centers across the world devoted to securing new innovations. Additionally, the company invested $12 billion into R&D in 2015 alone. Samsung has been the second-largest patent holder in the United States since 2006, even though it is not based in the U.S. Finally, Samsung also has its own Strategy and Innovation center, which is a division of the company specifically devoted to designing future products and solutions.

Because of innovations such as these and the Galaxy Fold, Samsung’s share of the smartphone market increased from 18 percent in Q4 2018 to 22 percent in Q2 2019. In the long run, this innovative strategy will help Samsung continue to attract more of the market. Although the Fold has its minor issues and a relatively high price, it demonstrates the company’s continued efforts to innovate in the space and capture an increasing share of the market.

Artificial intelligence (AI) has already found widespread application in the world of business. The automation of repetitive and dangerous tasks, as well as language processing in automated calls and chatbots, has changed the game for many industries. AI’s integration into business applications is clearly a trend that will grow in the future. Playing a key role in this future are generative adversarial networks (GANs). Despite being a relatively recent development, their continuing advancement will put them at the forefront of revolutionizing marketing.

While the technology behind GANs is complex, the concept is quite simple. A GAN is composed of two neural networks (a neural network is a form of AI that is good at recognizing patterns): the generator and the discriminator. Both are trained on the same data set but have different tasks. The generator creates new content based on the similarities it finds in the data set, while the discriminator compares that new content against the data set to judge whether it is good enough to pass off as real. This constant battle between the two networks is what makes generative adversarial networks ‘adversarial.’
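To make the two roles concrete, here is a deliberately tiny numeric sketch: the “generator” is just an affine map of random noise and the “discriminator” a logistic classifier, trained against each other on a one-dimensional stand-in for a real data set. Every choice here (the sizes, the learning rate, the Gaussian “real” data) is an illustrative assumption, not how production GANs are built.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=4.0, scale=0.5, size=(256, 1))  # toy "real" data set

g_w, g_b = 1.0, 0.0    # generator parameters (starts far from the real data)
d_w, d_b = 0.1, 0.0    # discriminator parameters

def generate(z):
    """Turn random noise into a candidate 'fake' sample."""
    return g_w * z + g_b

def discriminate(x):
    """Estimated probability that x came from the real data set."""
    return 1.0 / (1.0 + np.exp(-(d_w * x + d_b)))

lr = 0.02
for _ in range(3000):
    z = rng.normal(size=(256, 1))
    fake = generate(z)

    # Discriminator step: push scores on real data toward 1, on fakes toward 0.
    err_real = discriminate(real) - 1.0
    err_fake = discriminate(fake)
    d_w -= lr * float(np.mean(err_real * real + err_fake * fake))
    d_b -= lr * float(np.mean(err_real + err_fake))

    # Generator step: adjust so its fakes score closer to 1 ("pass as real").
    g_err = (discriminate(fake) - 1.0) * d_w
    g_w -= lr * float(np.mean(g_err * z))
    g_b -= lr * float(np.mean(g_err))

# After training, the generator's samples have drifted toward the real data.
fake_mean = float(np.mean(generate(rng.normal(size=(2000, 1)))))
```

The tug-of-war is visible in the two update steps: the discriminator’s error terms reward separating real from fake, while the generator’s gradient flows through the discriminator’s current judgment.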

Not only are GANs dynamic, but they are also adaptable. Once trained, they do not stay static. The generator continually refines its creations to try to increase the portion that can slip past the discriminator. The discriminator, conversely, improves its methods to catch even more of the generator’s creations. GANs operate like a continual cat-and-mouse game between a counterfeiter and a cop. Furthermore, new raw data can be added to the original data set so that the GAN generates and discriminates new kinds of information. This adaptability means that GANs never have to become out-of-date.

Specifically, GANs will play a critical role in marketing. Even the greatest products and services can fall flat with poor marketing. Businesses can employ GANs as a form of test marketing to predict and improve how consumers respond to new products, helping them overcome the challenge of marketing and present their products and services in the best light possible.

There are multiple ways that GANs can help with marketing. For one, businesses could employ them to improve product recommendations in online marketplaces. In January 2018, three researchers at Amazon India Machine Learning trained the ecommerceGAN (ecGAN) to do just this. Theoretically, a customer could buy any combination of products from an online marketplace in a single purchase order. The ecGAN explores the space of all possible orders a customer could make and determines which orders a customer would plausibly place: the generator creates possible order combinations while the discriminator decides whether each order is realistic. The original training data set can also be updated to include emerging trends in the digital marketplace, allowing the GAN to recommend products that both today’s and tomorrow’s customers will actually want to buy together. This model tightens the accuracy of product recommendations on websites.

In addition to improving advertising, GANs can also help forecast sales. The same three researchers at Amazon India Machine Learning developed the ecommerce-conditional-GAN (ec2GAN) to predict customer demographics, selling prices, and the general timing of sales. As it turns out, the ec2GAN’s generated scenarios lined up well with real-world data. For example, products characterized as men’s shorts were predicted to sell 63.10 percent of the time during summer months; in reality, men’s shorts were purchased 67.52 percent of the time during those months. Applied across customer demographics, prices, and seasons, the ec2GAN can provide invaluable information for retailers.

Knowing what is bought together, when, for how much, and by whom is a good starting place for marketing information; still, it is important to know how a product listing could be changed to influence sales. In an increasingly digital marketplace, product descriptions play a crucial role in selling the product. Two researchers from Stanford trained a GAN on 40,000 Airbnb listings in Manhattan, New York, posted between January 1, 2016 and January 1, 2017. Their hypothesis was that a listing with a better, more appealing description would have a higher occupancy rate. The hypothesis proved false: the description did not have as large an impact on occupancy rate as other key variables like location, home type, and amenities. The exception, however, was keyword packing. Listings that used many key search terms became more visible on the website and thus had higher occupancy rates. GAN technology was able to identify this significant pattern for businesses to take advantage of.

GAN technology can be applied to gather useful marketing information to aid businesses. Despite having been first introduced in 2014, the technology has made great improvements in the past five years. As researchers continue to refine different types of GANs and find more applications, it is clear that GANs are here to stay, and they will only occupy a larger place in the business world in the years to come.

This article was featured on the DBJ Instablog on Seeking Alpha.

The idea behind patents makes intuitive sense. If a company spends significant resources on researching a new product, it should be granted temporary exclusive rights to its findings. This prevents rivals from stealing those results and profiting from them without paying any costs. Otherwise, the innovative firm would end up losing money for developing new technology.

Keeping this in mind, the Toyota Motor Corporation caused quite a stir when it announced this past January that it is inviting its competitors to use Toyota’s 5,680 patents on hydrogen fuel cell vehicles for free until 2020. After all, a patent is by definition protective. So what’s going on? Have the execs of the car company lost their minds? No, it turns out that Toyota’s move to share its patents is a gamble, but it’s not irrational. The challenges of hydrogen production and distribution have incentivized the company to give away its patents in order to give hydrogen vehicles the “critical mass” it needs to overcome these problems and enter the mainstream market.

Understanding hydrogen vehicles 

To understand hydrogen vehicles, one must first understand the process of electrolysis. Electrolysis is the application of a direct electric current (DC) to induce an otherwise non-spontaneous chemical reaction. For example, the electrolysis of water separates water into its component elements: hydrogen and oxygen.

Hydrogen vehicles rely on fuel cells to perform reverse electrolysis, taking in hydrogen as fuel and oxygen from the surrounding air to produce electricity. Their only exhaust is water vapor (pure enough to drink), meaning hydrogen vehicles emit no greenhouse gases whatsoever. Compare this to the 1.8 billion tons of carbon dioxide that conventional vehicles release each year, which the Environmental Protection Agency reports accounts for 28 percent of all greenhouse gas emissions in the United States (second only to power plants). And while a kilogram of hydrogen holds the same chemical energy as a gallon of gas, fuel cells are so much more efficient than combustion engines that, functionally, a kilogram of hydrogen is equal to more than six gallons of gas.

Hydrogen vehicles even have advantages relative to hybrids and electrics. Because fuel cells are small and thin, they can be stacked for vehicles of greater size – unlike hybrids, which are restricted by their heavy and bulky batteries. Hydrogen vehicles can go farther between refuels than most electrics can between recharges: Toyota’s Mirai has a range of 300 miles, compared to the 265 miles of Tesla’s Model S (by far the longest range among electrics). In addition, hydrogen vehicles can be refueled in three to five minutes, whereas even the Tesla superchargers require at least 20 minutes for a full charge. Given all this, it isn’t surprising that many experts and industry leaders see hydrogen vehicles as the future of transportation.

Promising but problematic

 The process of electrolysis has been well understood since the 18th century, so why have hydrogen vehicles entered the market only now, two decades behind hybrid vehicles? For much of that time, hydrogen vehicles have been too expensive to manufacture to be of consumer interest. Hydrogen fuel cells need expensive platinum as a catalyst in order to perform reverse electrolysis fast enough for the vehicle’s operation. Furthermore, hydrogen is highly flammable and, like all gasses, expands with rising temperatures (such as those found under the hood of a moving car). Luckily, technological advances have lessened the amount of platinum required, and safe ways of containing hydrogen have been developed. According to Toyota, the cost of making key components of the vehicles has fallen 95 percent in the past seven years, allowing them to sell the Mirai at $57,000 (less than a Model S) instead of the $100,000 it projected in 2008.

But no matter the cost, a hydrogen vehicle will need hydrogen. And although hydrogen is the most abundant element in the universe, very little of it exists freely on Earth; nearly all of it is bound up in compounds such as water and hydrocarbons. Therefore, hydrogen must be derived from other substances.

The two main ways of producing hydrogen are electrolysis of water and a process called steam reformation, in which natural gas is reacted with high-temperature steam to separate out the hydrogen from the hydrocarbons in the gas. Electrolysis can be ruled out for an obvious reason: it consumes electricity to produce the hydrogen that a fuel cell then turns back into electricity, defeating the purpose. That leaves steam reformation. But since natural gas is a fossil fuel, and the point of hydrogen vehicles is to reduce dependence on fossil fuels and their consequences, steam reformation isn’t preferable either.

And even if this challenge were overcome, there exists little infrastructure for delivering that hydrogen. Gas stations are of course everywhere, and the number of charging stations for hybrids and pure electrics continues to increase, but there are virtually no hydrogen fueling stations. As of this writing, fewer than 70 such stations exist in the entire United States – most of them in California, where the Mirai will begin to sell later this year. Clearly, having a hydrogen car is one thing, but being able to drive it is another.

“Critical mass” solutions

Fortunately, there is work being done on both of these problems. On the hydrogen production front, new technologies such as fermentation, photobiological water splitting, and renewable liquid reforming are being developed. A hydrogen fueling station in Fountain Valley, a suburb of Los Angeles, has already employed one of the newest techniques. The station uses human waste from a nearby wastewater plant as its hydrogen source by adding bacteria to turn waste into carbon dioxide and methane, which is then converted to electricity, heat, and hydrogen.

More recently, scientists at the University of Glasgow published a paper in Science this past September explaining how they created a method based on the electrolysis of water, which produces hydrogen 30 times faster than the current best processes. This method needs much lower currents than traditional electrolysis, making it possible for renewable energy to power the method and thus making the use of hydrogen vehicles completely emission-free. But while these results are promising, they will require a significant amount of capital for further research and implementation testing before they can be commercialized.

As for hydrogen delivery infrastructure, California has invested $200 million to build 100 hydrogen fueling stations, and is willing to invest more in stations if successful. Toyota has also loaned $7.3 million to FirstElement Fuels, Inc., to build 19 stations in California. The company is also working with Air Liquide S.A. to build 12 more stations in New York, New Jersey, Massachusetts, Connecticut, and Rhode Island, where it will begin selling the Mirai next year. Though these sums sound high, hydrogen stations are actually cost-competitive with gas stations on a cost-per-mile basis, since fewer are needed thanks to the higher efficiency, and thus greater range, of hydrogen vehicles. As long as the state or private companies are willing to invest in building these stations, the country could conceivably go hydrogen.

But the key words here are “as long as”. If hydrogen vehicles remain a fringe technology, they risk being pushed out of the alternative niche by hybrids and pure electrics, which already have established infrastructures. Furthermore, hybrids and pure electrics lack the problem of energy source production, so they are already favored and more likely to be further developed. Simply put, if hydrogen vehicles don’t gain significant awareness soon, they will be outcompeted by their alternative bedfellows.

This problem explains Toyota’s actions. Although a major player in the auto industry, Toyota understands that its bid in hydrogen vehicles alone is not large enough to draw the critical degree of attention needed. By giving free access to its patents, Toyota effectively lowers other companies’ entry costs by paying for their research, in hopes of enticing more automakers, cell part suppliers, and energy companies to enter the market. This would in turn increase the volume of hydrogen vehicles and related support, which could push hydrogens into the spotlight and attract investors and capital to solve the production and infrastructure challenges discussed above. At or near that point, which Toyota judges to be in five years according to the duration of its offer, Toyota will close off access to its patents and begin focusing on its own research and business. Essentially, Toyota believes that the cost of bringing hydrogen vehicles into the spotlight is smaller than the profits it stands to lose if hydrogens never become mainstream.

This isn’t the first time Toyota has employed such a strategy. In 1997, Toyota licensed its patents for hybrid technology to Ford, Nissan, and others, who paid for that access. As Toyota hopes will happen again with hydrogen vehicles, this move increased the volume of hybrid activities and steered significant attention. And sure enough, when hybrids entered the mainstream market in the late 2000s and early 2010s, Toyota sold nearly a quarter-million Priuses a year, making the Prius the world’s third best-selling car in 2012. Toyota is hoping for a repeat performance with hydrogens.

Right now, hydrogen vehicles are starting out small. Toyota plans to sell 700 Mirais this year. Hyundai, which is preparing to sell its Tucson, plans to sell just 60, and Honda just entered its final marketing stages. Meanwhile, General Motors, Ford, and Audi are all in the development stage on their own hydrogen vehicles. As Toyota expands the Mirai market to the five states listed above next year and the other automakers make their own market entrances and extensions, time will tell whether Toyota will succeed in pushing hydrogens into the mainstream market with its strategic loss plan.

It hovered over the annual Dartmouth Homecoming Bonfire. “It’s a drone,” my friend explained. A drone? The only drones I’d ever really heard of were furtive aircraft used for reconnaissance missions and surveillance over enemy territory.

But as the Homecoming weekend came to a close and the green “18” finally rubbed off of my chest, I found the incredible video captured by this high-flying piece of technology. I looked further only to find that the drone market, like the footage I had just watched, is soaring. Sturdy frames and notable flight stability let these drones fly with surprising precision, while GPS systems reduce, and mostly eliminate, pilot error from the ground.

In response to the booming commercial drone market, companies like GoPro have established strong footholds within the industry. And while GoPro has tapped substantial profits from major drone producers, the producers themselves have emerged as the ultimate winners. Parrot, for example, a French tech company that specializes in drone production, marked a 130 percent spike in drone revenue.

So what is it that makes these drones so enticing? Drones offer a glimpse into the future of technology. They produce 3D landscaping for agricultural research that farmers can use for highly accurate aerial data acquisition. Companies like Amazon—specializing in internet-based retail in the United States—hope to utilize drones in the delivery process. Some daring owners even flew their drone into an active volcano, and when the camera melted from the overwhelming heat, the drone, operating through a programmed safety feature, was able to return to its owner on its own.

While the civilian drone industry is booming, the military drone industry still dwarfs it. Currently, civilian drones make up only 11 percent of the drone industry, although analysts expect this percentage to increase to over 14 percent. Though these numbers still seem low, in an aerial drone market expected to climb to over $98 billion within the next decade, commercial drones would still hold an impressive stake ($13.72 billion) in earnings.
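The cited figures hang together, as a one-line calculation shows (dollar values in billions, taken from the projections above):

```python
# Back-of-envelope check of the drone market-share figures cited above.
total_market = 98.0   # projected aerial drone market over the next decade ($B)
commercial = 13.72    # projected commercial/civilian stake ($B)

share = commercial / total_market
print(f"{share:.0%}")  # → 14%
```

The $13.72 billion stake is exactly the "over 14 percent" share analysts project for civilian drones.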

Although the fiscal state of the drone market is optimistic, there are still obstacles. In recent years, the underfunded Federal Aviation Administration (FAA) has significantly handicapped the drone industry. Companies like Amazon have warned the U.S. government that “[they] will have no choice but to divert even more of our [drone] research and development resources abroad.”

On the FAA website, several rules restrict drone users, and ads litter the page explaining that “the Super Bowl is a no drone zone, so leave your drone at home”. Because these drones are classified as “model aircraft”, they fall under a specific set of rules. They must remain “below 400 feet, away from airports…and within sight of the operator.” Additionally, the FAA claims the ability to “take enforcement action against [those who]…endanger the safety of the national airspace system.”

Recently, in fact, a drone crashed onto the White House lawn, violating the FAA rules that restrict flight over Washington, D.C. In an interview with CNN, President Obama even remarked that the incident only calls for more restrictive regulations on commercial drones.

And so, with the future of these drones looking unclear, we are left to grapple with two different ends of the spectrum. On one end we see a commercial drone industry with considerable potential in the technological world, and on the other the careful yet considerably limiting FAA. As mentioned by an entrepreneur interviewed by Fortune, “There’s still a lot of uncertainty, but the time for this industry is now.”

This past September, NASA announced two landmark contracts with domestic aerospace firms. Boeing and SpaceX, arguably the two greatest innovators in space technology over the past decade, walked away with a combined $6.8 billion to finalize their capsules and thruster systems so that they may provide transport to the International Space Station (ISS) for US astronauts by 2017. However, September’s announcement did more than step up NASA’s current space programs; it signaled an unprecedented move to privatize the space industry.

Private US aerospace companies were eager to express their approval of these latest contracts, which vastly expanded earlier NASA initiatives. In 2009, NASA launched its Commercial Crew Program (CCP) in an effort to “stimulate the private sector to develop and demonstrate human spaceflight capabilities that could ultimately lead to the availability of commercial human spaceflight services.” Major players such as Boeing, SpaceX, and Sierra Nevada Corporation have all been beneficiaries of the program, each receiving funding in the realm of $500 million for the initiative.

The impetus behind this latest move has its roots back in 2011, when the United States government terminated the shuttle program. Since then, US astronauts’ only option for reaching the International Space Station has been to catch a ride with the Russians, an alternative that is far from perfect. The funding cuts handed down from Congress hardly cover the hefty $71 million per seat that the Russians are charging. From the very outset, the steep price tag proved to be an issue and spurred NASA to investigate other options. Along with these fiscal motivations, growing tension between the United States and Russia, recently exemplified by the animosity over Ukraine, provided additional momentum behind the push to return the space industry to US soil.

The decision to hand over space travel to the private sector is nothing short of a clear course change for NASA, and many argue it is a change in the right direction. In the September contracts, Chicago-based Boeing and Los Angeles-based SpaceX walked away with $4.2 billion and $2.6 billion respectively. The funds awarded are earmarked for certification and final development of each company’s respective capsule: Dragon for SpaceX and CST-100 for Boeing. Both Boeing and SpaceX have proven that they can innovate, and NASA believes that the competition between the firms will serve to alter the face of spaceflight.

In a recent interview, SpaceX’s CEO, Elon Musk, said that these contracts are absolutely about driving down costs and eliminating the United States’ dependence on Russia. As a relative newcomer to the field, Musk also pointed out that this contract helped secure a spot for SpaceX as a “key anchor tenant” in NASA’s plans for the future. NASA’s new initiative is a crucial next step for pioneering companies like SpaceX and will allow them to prove themselves as top innovators. The key, Elon Musk believes, is that you’ve got to be committed, especially when you’re competing with the likes of Boeing, which plans on bringing aboard Amazon CEO Jeff Bezos and his company Blue Origin to help with new rocketry. As NASA turns its attention towards deeper space missions to Mars and asteroids, the companies are investing for the long run.

Mr. Musk, NASA, and other private aerospace firms all seem to have their focus on both the long term and the big picture. After all, the long game is the nature of the space industry. Nothing happens overnight and nothing is cheap or simple. Planning years into the future is often required to put up a successful mission, and both SpaceX and Boeing have sought to break new ground. Both have created capsules that can splash down like conventional capsules but can be reused, saving a great deal of time and the enormous costs that come with starting from scratch after each mission. SpaceX’s Grasshopper rocket has pulled off some impressive feats in testing, showing that it can effectively launch, soar to great heights, and employ its guiding sensors to re-land on the same pad from which it launched. Each company is deeply committed to inventing innovative space technology that will cut costs and increase the efficiency of leaving our atmosphere, which, by the way, requires an escape velocity of roughly 25,000 miles per hour. Talk about a high-stakes buy-in.

No time since the Apollo age has been more exciting for the space industry. New ideas, fresh faces, and private companies are mixing it up with the old guard at NASA. Beyond the standard cast of characters in the established corporate world, some of the world’s most innovative billionaires have made substantial investments in private spaceflight, earning themselves a spot among the “space cowboys.” But new faces and bold innovation still need to come to terms with old problems and the inherent risks associated with space travel. The Challenger and Columbia disasters remain part of public consciousness and are a reminder of how wrong a mission can go. Such disasters have put a great deal of pressure on NASA to go unmanned whenever possible. This is especially the case given the aging fleet of refurbished rocket engines and other parts now being used by some companies, a practice SpaceX is openly critical of. Like many, critics fear such stopgap measures will tarnish the privatization process as a whole.

In the past, NASA missions required a great deal of time, energy, and planning that caused long separations between missions. The goal of privatization is to make this a thing of the past and to make spaceflight more commonplace, less expensive, and more accessible. The development of new rocketry, fueled in part by fierce competition, sets a feverish tone that will catapult the United States back into manned space missions and routine space transport. NASA’s ultimate goal is to reach a point where spaceflight is a possibility for more than just astronauts. In a recent press conference, NASA administrator Charles Bolden announced that the contracts with Boeing and SpaceX bring with them the “promise to give more people in America and around the world the opportunity to experience the wonder and exhilaration of spaceflight.” The private sector has the capability to make this goal a profitable possibility, even with a ticket price well below the going $71 million the competition is charging.


An Idaho-based company might just have the solution to the issues that currently exist for solar energy. Solar Roadways, founded in 2006, came up with the ingenious idea of replacing all United States paved roadways with durable and versatile hexagonal solar panels. On May 18, 2014, an enthusiast unaffiliated with the company released a video outlining the benefits of such a system, including easy maintenance, the ability to be heated in cold climates, and versatility. “SOLAR FREAKIN’ ROADWAYS!” was heard all around the internet not even a week after the promotional video’s release. That was just the beginning.

Right now, the United States faces a choice between maintaining its poor infrastructure and updating it, and the government, at both the national and state levels, has yet to make the call. As a result of this indecision, highway associations and departments of public works across the United States have been making slapdash repairs that don’t last nearly long enough and end up causing more issues in the long term. The American Society of Civil Engineers (ASCE) released its quadrennial report in 2013 grading each sector of America’s infrastructure. Since solar roadways have the potential to affect electrical, bridge, and roadway infrastructure, the funding discussion centers on those three sectors. The Federal Highway Administration estimates that $170 billion must be spent annually through 2020 to significantly improve the conditions and performance of roadway infrastructure in the United States, an increase of $69 billion over current spending. It also estimates that $20.5 billion must be spent annually through 2028 to improve overall bridge conditions, an $8 billion increase over current levels. And the ASCE estimates that, between distributing energy and transmitting it from generating sources to distribution chains, the United States will need to spend close to $94 billion annually through 2020, a substantial increase from the current $63 billion. Together these estimates amount to a grand total of $108 billion in extra funding per year, and that is merely the amount needed for the United States to catch up to, not exceed, current infrastructure standards. Conversely, The Economist reported in June 2014 that implementing a system to replace the entirety of America’s roadways would cost an estimated $1 trillion, a figure that does not even account for the research and development that would be necessary to implement solar roadways.
Put another way, in terms of upfront costs, it would undoubtedly be cheaper and more convenient to maintain the status quo for roads in the United States. If Solar Roadways were simply a paving material, it would not be the cheaper cure for the United States’ infrastructure crisis.

Luckily, Solar Roadways has possibilities that far exceed those of asphalt and concrete. First and foremost, solar roadways would provide a national path toward energy independence. According to 2013 figures from the Energy Information Administration, fossil fuels, a portion of which are imported, make up 67% of the electricity generated in the United States. Constantly functioning solar panels covering 31,250 square miles of roads, parking lots, driveways, playgrounds, bike paths, and sidewalks in the United States could change those proportions. According to Solar Roadways’ own estimates, their technology spread across the country could produce over three times the electricity the United States currently uses each year. Solar roadways would thus not only allow for sustainable energy independence, but would also provide enough of a cushion to maintain that independence even in the most drastic of situations.
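Solar Roadways’ “over three times” claim can be sanity-checked with a rough back-of-envelope estimate. The insolation, panel efficiency, and consumption figures below are illustrative assumptions of ours, not numbers from the company or the EIA:

```python
# Rough sanity check of the generation claim. All constants below are
# assumed round numbers, not figures from Solar Roadways or the EIA.
SQ_MILE_TO_M2 = 2.59e6              # square meters per square mile
area_m2 = 31_250 * SQ_MILE_TO_M2    # paved area cited in the article

insolation_kwh_m2_day = 4.0         # assumed U.S. average solar insolation
panel_efficiency = 0.15             # assumed net conversion efficiency
annual_kwh = area_m2 * insolation_kwh_m2_day * 365 * panel_efficiency

us_consumption_kwh = 4.0e12         # assumed ~4,000 TWh/year U.S. usage
ratio = annual_kwh / us_consumption_kwh
print(f"Estimated output is roughly {ratio:.1f}x U.S. annual consumption")
```

Under these assumptions the ratio comes out above four, broadly consistent with the company’s claim, though the real number depends heavily on panel efficiency, shading, and geography.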

Secondly, Solar Roadways could help improve energy efficiency in the United States. One large issue with current energy production is that generation sites sit far from where energy is consumed, especially for nonrenewable sources, which introduces transmission losses. Solar roadways, by contrast, are designed so that generation runs concurrently with the grid itself, which could help the United States control the overall cost of energy.

Beyond the primary issue of energy, solar roadways could improve transportation infrastructure in several other ways, most notably highway safety. Asphalt roadways currently see around 6.5 million accidents per year. Heated and illuminated panels that are easily replaceable and incorporate storm drains would give roadways increased visibility, and giving drivers more control during rough driving conditions should improve overall highway safety.

Finally, Solar Roadways would have the advantage of easy recyclability. The tiles created by Solar Roadways, from the glass surface to the inner components, are entirely recyclable. In contrast, recycling concrete and asphalt is a labor- and capital-intensive process, not one easily undertaken by any company or government institution that seeks to repave roads, sidewalks, or the like.

A report from the National Economic Council and the President’s Council of Economic Advisers from July 2014 notes that “a high quality transportation network is vital to a top performing economy.” It has already been established that the United States’ transportation infrastructure is of poor quality and may in fact be a drag on economic growth and productivity. The process of introducing solar-panel-laden roads may also prove a prime opportunity for the federal government to implement more stringent quality standards for infrastructure.

Of course, there remain many large issues and question marks as to the feasibility of a nationwide conversion to Solar Roadways. The largest issue with switching a major amount of asphalt and concrete production to solar panel production is labor displacement. The asphalt production industry employs somewhere around 300,000 Americans, and the concrete production industry employs close to 170,000. Solar Roadways thus needs to make up close to 450,000 jobs in manufacturing, engineering, and maintenance if the technology is to be a viable alternative that allows the United States to continue job growth. Unfortunately, projections of exactly how many such jobs Solar Roadways can produce are unavailable due to a lack of empirical evidence.

Furthermore, solar roadways have only been produced and tested as prototypes in a small shop setting in Idaho; they have never been produced on a large scale. Solar Roadways has yet to analyze the impacts that different weather and geographic conditions could have on its product. Making the move to mass production will also be a significant challenge: these specialized solar panels are meticulously crafted at a small scale that has not yet been translated into an industrial-level operation. Before solar roadways can be implemented, the company needs to iron out all of the issues that could occur during mass production or deployment. Even if the remaining prototype-phase issues are eventually worked out, the fact remains that the technology is not viable in its current state and cannot be adopted by the United States as is.

Finally, strong political interest groups may also prove a stumbling block for the energy infrastructure startup. Established industries such as asphalt and big oil have political clout and are certain to lobby against solar roadways. Oil is one of the largest industries in America and possesses deeply entrenched political power in Washington, rivaled only by the American Medical Association and the National Rifle Association. Solar Roadways, if it wants to find a solid place in America, will thus have to face and combat considerable political opposition.

There is no doubt that Solar Roadways has great prospects as a technology. It will help the world reduce its carbon footprint and dependence on non-renewable sources of energy. Solar Roadways offers important improvements to the current system of highway infrastructure ranging from safety to energy efficiency. While nothing is yet concrete for solar roadways in terms of implementation, the potential for solar roadways, especially in a country with thousands of miles of roads like the United States, is limitless.

The biotech industry has been heating up. As of mid-February this year, the number of IPOs by biotech companies in 2014 has nearly reached 20, representing capital raising efforts of over $1.1 billion. During the first biotech boom era, the year 2000 saw the IPOs of 26 biotech companies, raising $1.9 billion.

This year, the IPO class of biotech companies represents a broad variety of biopharmaceutical endeavors, from gene therapy to protein therapeutics to personalized immunotherapies. As a result of an increased appetite for risk on the part of investors, the high uncertainty of a biotech venture has become easier to stomach.

Amsterdam-based uniQure offers the first, and currently only, approved gene therapy product in the European Union. Gene therapy is a promising new form of disease treatment that targets mutated DNA within a patient’s cells. The firm raised $82 million after issuing 5.4 million shares at $17 per share, 21% higher than the midpoint of its filing range.

As a company with a drug that has already been approved, uniQure is much more likely to succeed than other companies that may be in the earlier stages of developing a drug. In fact, in the drug development business, many early-stage compounds will never make it to market. The most promising compounds, after undergoing rigorous testing to ensure they will be safe in humans, take several years to reach clinical stage. Even then, based on past data for the productivity of pharmaceutical R&D, only 20% of candidates entering the clinical trial phase will receive FDA approval.

That biotech is heating up is also evident with the IPO of Eleven Biotherapeutics, which raised $50 million by pricing its shares at $10. Though the shares priced at 28% below the midpoint of the company’s filing range, Eleven’s stock price rose 8.5% on the first day of trading.

While uniQure’s gene therapy drug, Glybera, treats a rare condition called lipoprotein lipase deficiency (LPLD), Eleven’s lead drug candidate targets dry eye disease (DED), which an estimated 26 million patients in the United States have. Investors thus flocked to the stock despite the company having no approved drug, because the larger target market for Eleven’s potential product buoys the probability of commercial success for the company.

Another cause for the increasing investor interest in the biotech market this year is the success of biotech offerings last year. In fact, while the number of companies that have offered shares in the public markets for the first time this year has surpassed 20, a number that is quite impressive already, it comes on the heels of a record-breaking 47 biotech IPOs in 2013.

Several companies in the 2013 IPO class that have been successful include Bluebird Bio, Aratana Therapeutics, and Foundation Medicine. Orphan drug development company Bluebird Bio raised $101 million on June 19 last year with a per-share price of $17. Now, Bluebird is 38% above its IPO price. On June 27, 2013, animal-care medicine company Aratana Therapeutics priced at $6, raising $35 million. Today, Aratana trades 243% above its IPO price. Third, personalized cancer therapy company Foundation Medicine sold nearly 6 million shares last September 25 for $18 per share, rising 96% on the first day to close at $35.35 per share. Now, Foundation is 76% above its IPO price.
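The percentage figures above are straightforward price ratios. As an illustration (the prices are those cited in the article, not live market data):

```python
# Return relative to IPO price, as used in the figures above.
def pct_return(ipo_price: float, later_price: float) -> float:
    """Percentage gain of later_price over ipo_price."""
    return (later_price / ipo_price - 1) * 100

# Foundation Medicine: priced at $18, closed its first day at $35.35
print(f"First-day gain: {pct_return(18.00, 35.35):.0f}%")  # -> 96%
```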

For many investors, the current atmosphere of optimism in the biotech sector is reminiscent of that from a decade ago, after medical research brought visions of leaping advances in the industry. Specifically, April 2003 saw the completion of the Human Genome Project, an international research collaboration that led to the sequencing of the tens of thousands of genes that make up the human genome. With this level of detail and information, researchers could seek further understanding of human disease. Many biotech companies today are working towards the commercial realization of those past advances in medical research.

This will be a hot year for the biotech sector, with no shortage of companies entering the playing field.

Microsoft has been a leader in the software industry for over thirty years, but its reign as a powerhouse built on the sale of computer code may be coming to an end. The company is far from dead, however, as it builds a new future in the increasingly competitive realm of hardware as evidenced by the Xbox, Surface, and now the acquisition of Nokia’s devices and services. The end of Microsoft as a software giant marks its beginning as a hardware player.

The rise of mobile computing has not been kind to Microsoft. The firm built its empire on a combination of sales of the Windows operating system and its Office productivity suite. While the enterprise market remains attached to Microsoft, the personal computer market in developed countries is saturated, and consumers have flocked to phones and tablets as the digital wave of the future. Though initially a competitor with its Windows Mobile offerings in the early 2000s, Microsoft has seen the iPhone and Android decimate its mobile operating system market share, now down to 3.7% this year.

Microsoft has tried valiantly with its new Windows Phone offerings to stay relevant in the smartphone age, but a modern aesthetic and a good feature set are clearly not enough to do to the smartphone industry what Microsoft did to the desktop. The executives at Microsoft’s headquarters in Redmond, Washington may be removed from Silicon Valley, but they are not oblivious.

In September, Microsoft showed it understood its failure and Apple’s success. By announcing the acquisition of Nokia, Microsoft will be able to sell a fully vertically integrated product. Google has recently followed a similar approach with its purchase of Motorola as the vehicle for its Android operating system. By making the software and hardware together, Microsoft can make refinements it could only dream of in the past. To paraphrase Steve Jobs’ philosophy behind Apple’s success, they can make it “just work.”

A Microsoft phone running Microsoft software brings an even greater advantage beyond fit and finish. People don’t buy software anymore. One can’t purchase the operating system of one’s choice for a phone the way one could for a PC. Even on the PCs that remain, Apple’s transition to free distribution of its operating system and productivity suite makes Microsoft’s sales model seem more archaic by the day. One buys the hardware; that’s where the profit is. In the first quarter of 2012, smartphone industry profits were $14.4 billion, much of it going to Apple. In that same quarter, Microsoft made a profit of only $5.7 billion from the entire company. With sales growing every year and large margins on hardware, smartphones are clearly the place to be.

Hardware sales aren’t new to Microsoft. Since launching the Xbox gaming console in 2001, its Entertainment and Devices Division has generated billions in revenue annually. Even if people don’t think of Microsoft as a hardware company, its vertical product approach in video gaming has been successful for over a decade.

The acquisition of Nokia and a future of Microsoft-branded phones were hardly unexpected. Lagging excitement at Microsoft, underperforming Windows Phone sales through third-party hardware manufacturers, and the bungled launch of Windows 8 combined to push CEO Steve Ballmer into retirement and generate speculation about something new.

In 2010, former Microsoft Executive Stephen Elop was appointed CEO of Nokia. As the first non-Finn to head the company and an employee from the ambitious Microsoft culture, some viewed him as a Trojan horse intent on ultimately making Nokia into a subsidiary of Microsoft. His initial decision to have Nokia, a company that failed in its adaptation to the rise of smartphones, discard its homegrown operating system in favor of Microsoft’s new Windows Phone software was a cause of great concern for industry watchers. Windows Phone was untested and Android seemed to be a much better choice. With the argument that he didn’t want Nokia to become another generic Android phone manufacturer, Elop pushed ahead with Windows Phone development, ultimately culminating in the current Lumia line of Nokia smartphones.

Windows Phone was not Nokia’s savior, however, and the company has continued to lose hundreds of millions of dollars quarterly. Its stock price has plummeted, and Microsoft has stepped in as a rescuer. Stephen Elop recently said, “I feel sadness because we are changing Nokia and what it stands for. And for all of us…there is ambiguity and concern because it is so hard to know what the future holds. But we have to do the right thing.” It’s unclear if Stephen Elop was in fact a Trojan horse, but his agreement to Microsoft’s offer of acquisition, and a new position as an Executive Vice President at his old company, does not suggest someone who always had Nokia’s best interests at heart.

With its expansion into direct hardware design, Microsoft is aiming to be the next Apple before smartphone buyer loyalties become too cemented. It’s unclear if it will be successful, as few complained of Windows Phone as an operating system or the third-party hardware it was sold on before. At the same time, Microsoft was a surprise success in the video game industry despite launching its first console decades after competitors. Microsoft is still a giant, and it has the reserves to invest and persevere for some time. Whether it can ever make a successful transition to hardware giant, though, is something only time will tell.


Touchless technology is changing how we interact with computers by allowing alternative ways to use existing applications and by providing programmers with the chance to develop new forms of software. Developed by Leap Motion, a San Francisco-based technology company, the Leap Motion Controller is one such piece of touchless hardware, and appeals to those interested in exploring these new avenues of technological communication. The device can be purchased from http://www.leapmotion.com for eighty dollars and, as of July 28, is selling in Best Buy stores. The release of the Leap Motion Controller has lately garnered a large amount of media attention and decidedly mixed reviews. This innovation certainly presents us with new opportunities, but is there a market for this type of product?

The Leap Motion Controller is a small, sleek device that can be paired with a variety of interactive programs. Rick Broida, a journalist for Computerworld, describes the item as “attractive and surprisingly compact,” and with a height of 0.5 inches, width of 1.2 inches, and weight of 0.1 pounds, the controller appears to offer more than its unassuming size would indicate. The product sits below the computer monitor and utilizes near-infrared LEDs and CMOS image sensors to recognize the user’s gestures. After plugging the device into a USB port and installing the necessary software, the controller is ready to function. Users can access the program Airspace to launch applications and purchase additional products from the Airspace store. Some notable applications include Corel Painter Freestyle (a painting program), popular games such as Cut the Rope and Fruit Ninja, Cyber Science (virtual dissection software), and Touchless, a program that enables mouse-free use of the general computer interface.

What is the actual demand for a product such as the Leap Motion Controller, and what is the overall viability of touchless technology in the marketplace? First, the device is relatively cheap and easy to set up, making it available and accessible to most categories of consumers. In addition, Leap Motion reviewers consistently comment that the controller performs particularly well in certain niche areas. Michael Steeber from 9 to 5 Mac contends that Leap Motion works well for “natural, interactive gameplay” and MIT Technology Review’s Rachel Metz posits that the technology can improve “computer-aided tasks, like drawing, modeling, and virtual dissections, as well as making it easier to surf the web.” Not only is there potential demand from consumers seeking computer assistance for specialized activities, but average users can also enjoy Leap Motion’s unique function as a gaming device. Overall, there appears to be a substantial potential market for touchless appliances.

Leap Motion, however, is not without its flaws. Although the controller works well with games and other inherently interactive programs, many reviewers found it difficult to use the device for general computer activity, such as browsing the Internet. Ostensibly the controller can track the user’s gestures to 1/100th of a millimeter, yet reviewers often felt frustrated when it failed to register simple commands. Another issue is that continual use of the controller proves tiring: many users complain of aches and soreness after holding their arms above the keyboard for an extended period of time.

Although Leap Motion has a few faults, it’s important to understand that the product is currently akin to a prototype and serves as an early example of the potential uses for touchless technology. Additionally, Leap Motion appears poised to reach a broader marketplace through computer manufacturers, as Hewlett-Packard plans to integrate the technology into its own products. Looking toward the future, touchless devices seem bound to permeate everyday life and improve tasks that are unsuitable for the mouse and keyboard.

While the smartphone has taken the world by storm and gained millions of fans and users within the past few years, there is a gadget that promises the same functions – all on your wrist. The battle for customers in the smartphone market will possibly switch to the watch market as many large tech companies like Google and Apple plan to launch their own version of a smartwatch.

So what exactly is a smartwatch? A smartwatch is a watch that connects wirelessly with a smartphone through Bluetooth and takes in wireless information to display news, weather, email, social network notifications, incoming calls, and more. Some smartwatches also offer GPS navigation, fitness tools, gaming, and remote music controls. The smartwatch enables users to receive notifications and control various smartphone functions remotely without having to constantly check their phone.

While they may not yet have popped up on the general public’s technology radar, smartwatches are not new products. They have been around since at least 2003, when Microsoft released its SPOT (Smart Personal Objects Technology) watches, which delivered information like news and weather. These watches did not garner much interest due to high costs and bulkiness; production ended in 2008, and the line ranked #15 on CNET Executive Editor David Carnoy’s list of the decade’s 30 biggest tech flops. However, there are rumors that Microsoft plans to re-enter the smartwatch market with a new design.

Currently, Sony and Pebble are two of the major players in the smartwatch market. Sony’s second-generation smartwatch, the SmartWatch 2, was released this June and can be used as a regular digital watch, or paired over Bluetooth and one-touch NFC with Android smartphones to access other features. It has a square color screen where users can view email and text messages, handle incoming calls, and adjust music settings remotely. But while the SmartWatch 2 boasts over 200 apps carried over from its predecessor, it falls short of the Pebble watch in terms of iPhone compatibility.

Pebble Technology, a startup from Palo Alto, launched its smartwatch, with a battery-saving black-and-white e-paper screen, in 2013. In need of capital, Pebble raised more than $10 million within 40 days from 68,929 backers on the crowdfunding site Kickstarter, becoming the most highly crowdfunded project at the time. The Pebble watch works with both iPhones and Android devices through Bluetooth to send notifications via silent vibrations. Pebble has arguably one of the most successful smartwatch products on the market, with at least 275,000 orders as of July 11, 2013.
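Those Kickstarter numbers imply an average pledge close to Pebble’s eventual retail price. A quick calculation, treating the “more than $10 million” figure as a lower bound:

```python
# Average Kickstarter pledge implied by the campaign figures above.
total_raised = 10_000_000   # "more than $10 million" -> treated as a lower bound
backers = 68_929
avg_pledge = total_raised / backers
print(f"Average pledge: at least ${avg_pledge:.2f}")  # -> ~$145
```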


There has been much fervor in the developing smartwatch market, with the biggest tech companies gearing up for a launch later this year or next year. Apple, Google, and Samsung have all been confirmed to be working on smartwatch products. Apple reportedly will launch the “iWatch” in 2014, and has already attempted to register “iWatch” as a trademark in many places, such as Taiwan, Russia, Mexico, and Japan. According to Bloomberg, Apple has assigned 100 designers and engineers to work on the new iWatch and has filed at least 79 patent applications containing the word “wrist”. Apple’s strong brand image and reputation for sleek, quality products could give a strong boost to the smartwatch market, just as it did with the iPod and the iPhone. Marshall Cohen, an analyst at NPD Group, asserts that “Apple can merge fashion with function,” and that a smartwatch from Apple “could triple the size of the watch business in a year or two. They have the opportunity to get everyone that owns a cell phone to go out and buy another watch.” Compatibility with the iPhone and other Apple products is sure to be a plus with consumers as well.


Google has already delved into the wearable technology market with Google Glass. Motorola, owned by Google, has put out its own version of the smartwatch with the Motorola MotoACTV sport watch, which can be connected to a smartphone but also has some autonomous fitness functions such as GPS exercise tracking and step tracking.

Samsung will reportedly launch a Samsung Galaxy Gear smartwatch, to be announced alongside the Galaxy Note 3 this September at the IFA (Internationale Funkausstellung) consumer electronics show in Berlin. Google and Samsung will most likely pair their watches to be compatible with Android devices and apps.

Avi Greengart, an analyst at the research firm Current Analysis, states that “2013 may be the year of the smartwatch” because “components have gotten small enough and cheap enough”. ABI Research, a market research and intelligence firm, projects that more than 1.2 million smartwatches will be shipped in 2013. But will the new line of smartwatches be able to garner more than the modest successes of their predecessors? It is debatable whether the average consumer will see a smartwatch as necessary when he or she already has a smartphone. A smartwatch is more of a smartphone accessory, and a smartphone has access to all the same functions and more, with a larger user interface. On the other hand, having these smartphone functions on a watch could make life easier, especially in situations where it’s more polite to glance down at your wrist than to take out your phone. Manufacturers will have to tackle the challenge of creating displays that are large enough to be useful yet stylish enough to attract both men and women. Adequate battery life and pricing could be concerns as well.

As for now, Pebble’s smartwatch priced at $150 is available at local Best Buys, with other smartwatches sold mostly online. We can only wait and see whether the upcoming lineup of smartwatches will live up to the hype.