Spending on technology acquisitions moved at a record pace in 2018. Technology giants including Microsoft, SAP and Salesforce announced billion-dollar deals after not completing any in 2017. Private equity firms spent more money on technology sector acquisitions than ever before. These strategic and financial acquirers focused on the top of the M&A market, completing a record 108 deals of more than $1 billion in value. The stage was set for 2018 to be a historic year in technology acquisitions.

Winter threw a wrench into the celebration. An equity market selloff during Q4 2018 wiped out annual U.S. stock index gains, and global exchanges suffered similar damage. Corporate earnings growth normalized, with 2018’s 20 percent rate expected to be halved in 2019. Credit became harder to obtain as the Federal Reserve hiked rates for the fourth time that year.

Mounting corporate concerns led deals to slow in Q4, leaving 2018 one large deal short of a historic year for technology acquisitions. Though some of the underlying instability behind Q4’s disappointment persists, 2018’s transaction momentum will carry over thanks to the strengthening of two key themes.

The technology sector’s shift from a vertical to a horizontal market has drawn companies from every sector into M&A. Private equity firms sitting on record levels of dry powder (committed capital they have not yet invested) have closed technology deals at an unprecedented rate. While macroeconomic concerns are prominent, the technology M&A cycle will persist in 2019 with the continued entry and development of private equity firms and diverse strategic acquirers. Macroeconomic conditions pose a threat to growth but will ultimately fail to drastically slow deal flow.

Technology sector acquisitions by non-technology companies are a strong driver of continued M&A performance in 2019. Software companies remain disruptors of every sector from transportation to fintech. Traditional businesses fighting off disruption find developing technology in-house costly and risky, as dynamic startups often move faster. For these non-technical companies, acquiring other firms proves a far more practical path. These transactions are ubiquitous, from Walmart’s purchase of Jet.com to General Motors’ nine percent stake in Lyft. These examples from 2016 demonstrate that technology companies have been pressuring other sectors for years. The pace of disruption, however, is only growing into 2019.

The boards and management teams of companies outside the technology sector well understand the destruction of established corporations at the hands of technological disruption. These leaders have reacted aggressively to the trend of mass displacement by the likes of Amazon in an effort to survive. Because developing technology in-house is painstaking, industry leaders defend their stronghold by acquiring the technology companies that threaten their market share. This driver will persist as more firms create advanced underlying technology that offers clients seamless adoption and a clear return on investment (ROI). Before the prevalence of cloud-based software, enterprise software was incredibly costly to adopt, providing industry stalwarts with a layer of protection against technology. Today, the velocity of software adoption is unprecedented, threatening to crush non-technical companies that fail to move quickly.

Beyond the need for survival, strategic acquirers are becoming more comfortable with the threat and pervasiveness of their technical competitors. As consumers across all sectors become accustomed to technology, traditional firms grow more confident in adopting new innovations. Technology has created new business models that lead to quicker product adoption, shorter time to market and faster product iterations. With the prevalence of technology M&A, traditional companies have grown more comfortable with forward-looking valuations centered on growth. As technology continues to move from a vertical to a horizontal market, firms will grow at faster rates and increasingly threaten to disrupt traditional companies. Strategic acquisition from outside the technology sector will therefore persist as a driver of technology M&A activity into 2019.

Increased private equity involvement in technology accelerated the M&A cycle in 2018 and will continue to drive growth in 2019. According to the Financial Times, 75 percent of the most active buyers of $100M+ technology companies in 2018 were private equity firms. Firms like Silver Lake, Francisco Partners and Vista Equity, not Google, Netflix or Microsoft, dominate the technology M&A space. According to KnowledgeBase, private equity firms participated in over 2,700 technology transactions from 2016 to 2018, solidifying their role as market makers in technology M&A. In 2017, financial acquirers had a historic year, out-buying strategic acquirers. The trend continued into 2018, with KnowledgeBase reporting 1,126 tech deals by private equity firms versus 846 by U.S. strategic acquirers.

One key explanation for the dominance of private equity across all M&A sectors is the record level of dry powder these firms are sitting on. PitchBook estimates dry powder totals over $1 trillion among private equity firms, with smaller growth equity funds sitting on over $500 billion.

Alongside a massive amount of available capital, the rapid movement of financial acquirers into technology is driven by an ideological shift. Private equity firms have thrown out their traditional playbook for technology, increasingly purchasing companies that generate no cash flow or are even burning cash. Private equity firms began entering technology around 2005, when they viewed technology investments as they did all others: regardless of industry, they targeted large companies that generated high cash flows but were experiencing limited growth. The cash flows of an acquisition target are critical in private equity because they service the leverage used to fund the acquisition. This model has since been dramatically updated. The valuation environment has become incredibly rich, with many in the industry proclaiming that “20x EBITDA is the new 10x EBITDA.” The new attitude of financial acquirers dramatically expanded the pool of companies these firms could target, leading to larger and more frequent buyouts.
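The leverage arithmetic behind that shift can be sketched with a few invented numbers. Everything below (the EBITDA figure, 60 percent debt funding, 8 percent cost of debt) is an illustrative assumption, not drawn from any real deal:

```python
# Sketch of why target cash flows matter in a leveraged buyout.
# All figures are hypothetical and chosen only for illustration.

def max_purchase_price(ebitda, ev_multiple):
    """Enterprise value implied by an EBITDA multiple."""
    return ebitda * ev_multiple

def annual_debt_service(debt, interest_rate):
    """Interest-only cost of carrying the acquisition debt."""
    return debt * interest_rate

ebitda = 100.0  # target's annual EBITDA, in $ millions (assumed)

# Compare the traditional 10x playbook with the richer 20x environment.
for multiple in (10, 20):
    price = max_purchase_price(ebitda, multiple)
    debt = 0.6 * price                          # assume 60% debt funding
    interest = annual_debt_service(debt, 0.08)  # assume 8% cost of debt
    coverage = ebitda / interest                # cash flow vs. interest owed
    print(f"{multiple}x EBITDA: price ${price:.0f}M, debt ${debt:.0f}M, "
          f"interest ${interest:.0f}M, coverage {coverage:.1f}x")
```

Under these assumptions, a 10x buyout leaves cash flow covering interest roughly twice over, while a 20x buyout with the same capital structure drops coverage to about 1x, which is why buyers at today’s multiples must either use less leverage or bet on growth rather than current cash flow.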

The ideological shift of private equity firms extends to strategy. Some of the largest private equity funds are playing a larger strategic role within their acquisition targets, expanding upon their traditional strategies of buying and building companies. Generalist funds, too, are staking their claim to technology companies. As technology firms become more horizontal, generalist financial acquirers understand more of their product ecosystems, allowing them to compete more effectively, develop investment theses and hire operating partners to add sector experience.

These themes of rich valuations for growth-centric companies were on display in late 2018, when Vista Equity took both MINDBODY and Apptio private. The firm paid over 8x trailing sales for the two software companies, both of which were growing quickly while burning through capital.

While the entrance of traditional non-technology companies and private equity firms into the technology M&A space spurs optimism for the industry’s continued expansion in 2019, a volatile macroeconomic environment has the potential to both help and harm the cycle. An EY survey of TMT M&A participants found that shifting geopolitical and regulatory landscapes were the leading cause of deal failure in 2018. The survey also found that the volatile macroeconomy is forcing executives to be disciplined: 96 percent of executives reported walking away from a deal in 2018, up from 76 percent in 2017. Macroeconomic concerns including volatile global equity markets, reversals of capital flows to emerging market economies, historically rich M&A valuations and increasing protectionism against cross-border deals all threaten the persistence of the technology M&A cycle in 2019. Relatedly, in Deloitte’s 2019 M&A report, technology executives cited an increased desire to acquire for talent, likely a result of a tight labor market and historically low unemployment.

Macroeconomic concerns may also benefit technology transactions in 2019. Tax legislation and a looser domestic regulatory environment helped M&A in 2018: corporate tax rates were slashed from roughly 35 percent to 21 percent, and penalties on repatriating overseas profits were largely eliminated. This looser environment spurred deals in 2018 and will continue to boost corporate confidence in 2019. While the Federal Reserve hiked rates four times in 2018, the cost of borrowing remains historically low. Cheap financing and a relatively strong equity market will contribute to increased deal flow in 2019.

Alongside macroeconomic uncertainties, the private equity dominance that will continue to drive technology acquisitions in 2019 could have major implications for investors, entrepreneurs and analysts. The IPO market is likely to be affected as technology companies choose private ownership over public scrutiny. Unicorns including Uber and Lyft waited for incredibly high valuations before even posturing to go public. Further, Qualtrics and AppDynamics opted to sell over pursuing rumored public offerings.

While private equity firms are increasingly accepting of acquisition targets burning cash, they still push for companies to reach profitability sooner. As these financial acquirers further entrench themselves in the technology space, entrepreneurs may focus on positioning themselves for private equity investment. Pragmatic innovation, combining a balance of growth and profitability, may become a dominant strategy among technology companies seeking an exit. The themes of private equity and non-technology firm dominance in the technology M&A space are here to stay. Only time will tell whether the technology industry will slow from a shift to pragmatic innovation or grow through private equity’s ability to pursue long-term, operationally intensive strategies and avoid the public market’s fixation on short-term results.


Recent technological development has impacted the gaming industry by expanding the capability of existing platforms, creating exciting new channels and challenging developers to think creatively about alternate forms of consumer experience. Of these changes, augmented reality (AR) and virtual reality (VR) seem to be the most prominent new frontiers for gaming. Modern examples of commercialized games demonstrate the tremendous potential of these technologies, which have come a long way from their purely theoretical and academic roots. According to Statista, the global AR and VR market will reach over $200 billion in 2021. The potential for substantial growth is driven by the emergence of new types of experiences; the gaming industry will likely see increasingly immersive and imaginative content in the near future. AR and VR are expanding the limits of what was previously considered possible in games and captivating consumers in new ways.

AR enhances a user’s view of reality by superimposing digital images to create a unique experience. While the technology has existed for decades, primarily in research and digital marketing, its accessibility to the average consumer was only recently made possible by drastic advancements in mobile devices. The ability of smartphones to run AR-enabled software combining visual, audio and geolocation sensors has created experiences that are both engaging and refreshing for gamers. With sufficient technology in place, developers are only beginning to explore the commercial applications. New categories of the industry may emerge, such as the combination of exercise and gaming. As active lifestyles become more popular in mainstream culture, these games aim to gain traction with new cohorts of consumers: those trying to start a daily routine, or those just hoping to make their existing routines a bit more fun. Thus, the total addressable market is well-positioned to expand: Statista estimates the global AR market will grow to approximately $90 billion by 2020.

Perhaps the best recent application of AR in games has been Pokemon Go (PoGo), which Niantic launched in the summer of 2016. Although wildly successful at first, the game’s use of AR is actually rather basic: it overlays mostly stationary creatures on the phone’s camera preview and lets users capture them by throwing virtual balls. More recently, Niantic added an “AR+” mode for iOS that anchors a creature within the environment even as the camera moves, letting users view it from different angles and step closer for more accurate throws. Although the game’s explosive early growth was arguably the result of hype from riding the coattails of the legendary Pokemon franchise, it also demonstrated that AR holds a viable and promising future in gaming. The substantial demand is driven by several factors: consumers are fascinated by the immersive playstyle, and the game turns exercise into fun as players continuously move around the real world. As a business, Niantic generates substantial revenue through microtransactions. According to SuperDataResearch.com, Niantic made almost $900 million in revenue from the game in 2017; as of April 2018, PoGo was the top-grossing Android app on Google Play. Overall, this demonstrates the compatibility of AR mobile gaming with existing monetization strategies and will undoubtedly encourage the development of similar games.

A close relative of AR, VR also uses computer-generated images to create new user experiences. The key difference is that VR seeks to supplant reality rather than supplement it: the idea is to completely simulate an environment and immerse the user in this artificial world, which requires clearing a much higher technological hurdle. Fortunately, engineering efforts over the last century in optics, computers and software have finally enabled users to experience the magic of what Palmer Luckey (founder of Oculus VR) and others call the last great human invention: the “final platform.” A 2017 Statista survey conducted in the U.S. found that over 90 percent of participants had already heard the term “virtual reality,” and a majority said the most interesting aspect of VR was the feeling of entering another world.

Even within the context of gaming, the spectrum of VR hardware is broad. After all, 3D simulation can be accomplished in many ways. From an accessibility perspective, smartphones are an easy entry point into VR games since most modern ones are powerful enough to run them. There are many headsets on the market today that are simply smartphone head mounts which use optical tricks to create a basic VR experience. Google Cardboard accomplishes this for only $15, but more durable sets like Daydream and Gear VR are still only a fraction of a smartphone’s cost. These headsets are a relatively small purchase for any curious consumer who already owns a smartphone. With more than a third of the global population estimated to be using a smartphone by the end of 2018 according to a recent forecast by eMarketer, the untapped market of VR mobile gaming is substantial.

On the other end of the accessibility spectrum lies more advanced and customized VR equipment. For $400, an Oculus Rift kit will provide a much more sophisticated experience than those like Google Cardboard. These goggles pack in more specialized sensors than a smartphone and rely on the PC they’re connected to for the necessary processing power to generate images. Similarly, consoles like the PlayStation are offering their own VR headsets and supporting accessories such as wands that enable more interaction with their virtual worlds. These purchases are quite substantial for any consumer, especially when they may become outdated in just a year or two.

With the variety of VR hardware described above comes a variety of games. Well-known titles such as Skyrim, Fallout and Minecraft have VR-enabled expansions for consoles and PC that offer players their familiar environments in a more intimate way, helping draw veteran gamers into the growing VR movement. More broadly, demand from consumers seeking a fresh perspective on modern games has led to promising results: International Data Corporation expects sales of over 12 million headsets in 2018, versus 8 million in 2017, attributable to improvements in hardware, software and pricing.

The variety of emerging VR games demonstrates rapidly growing commercial interest, as developers shift their focus to meet demand in this promising new space. As with many emerging markets, the first step after broad consumer accessibility is a land grab. Tech giants like Facebook, Apple, Google and Microsoft have made significant investments to capture market share: Google acquired Owlchemy Labs (a VR game studio) in 2017, Apple bought Metaio (an AR software startup) in 2015 and Facebook paid $2 billion for Oculus VR in 2014. Sony made substantial investments to develop PlayStation VR, much as Microsoft did with HoloLens for AR. Now that their hardware is in consumers’ hands, these companies will seek to hold attention by developing more compelling games. Just as with AR, it seems that today’s VR games are only the tip of the iceberg. The traditional experience that kept gamers glued to a screen and clutching controllers is shifting toward a more dynamic one, where users wearing headsets continuously look around as they slash virtual swords and fire laser rifles.

Both AR and VR are offering users fresh experiences that continue to surprise, impress and captivate. Over the next few decades, refinement of VR will likely enable the mirroring of reality and beyond, allowing us to explore with all our senses places that have captivated our imaginations since childhood, such as space. Traditional video game genres like first-person shooters will become more immersive, while sports games could offer a first-person perspective to any ordinary player. Professional athletes could benefit from AR goggles that show real-time trajectory calculations and monitor vitals. Popular outdoor hobbies like fishing and hunting could be supplemented by simulators that offer much of the same sensory experience without any devastating ecological consequences. With all these possibilities, the line between virtual gaming and life begins to blur, and traditional gaming companies will evolve to provide ever more high-tech entertainment. Even though development still has a long way to go, if the industry’s efforts to date are any indicator, we have plenty to look forward to.

On a calm Saturday morning in September, Indian Prime Minister Narendra Modi and many of India’s political elite landed at San Francisco International Airport. The Bay Area’s tech titans had anticipated his arrival for weeks. Over the next two days, he received VIP treatment unlike any other from the likes of Elon Musk, Mark Zuckerberg and Tim Cook. For Modi, the trip was an opportunity to expand his country’s business potential and encourage bright Indians to build their futures at home. For the tech titans, it was an opportunity to break into the fastest-growing market on the planet.

Modi is a controversial figure in his homeland. In his previous tenure as Chief Minister of the state of Gujarat, he was condemned by many for not intervening in the state’s 2002 riots, in which over a thousand people died, leading many to decry him as unsuitable for the country’s highest political office. Even after being elected, Modi continues to face opposition from a variety of parties and political groups for his perceived Hindu nationalism in a country where, according to Qatar-based publisher Al Jazeera, over 200 million Muslims reside.

In spite of this, his desire for a strongly pro-business environment to expand India’s economy has earned him a wide range of support. His push for expedited technological progress and his fight against corruption in a country where such behavior is commonplace catapulted him to the top of the general election race in 2014. His September visit to Silicon Valley sent a clear message to his detractors that he is resolute about creating an Internet-connected, tech-savvy India.

For the Silicon Valley CEOs, Modi’s visit was important for business opportunities. According to the Organization for Economic Co-operation and Development, India’s real GDP growth has averaged nearly 6.9 percent per year over the past seven years, suggesting India has the potential to be a strong market in the future. But the real reason for India’s increased attractiveness to Silicon Valley lies in the perceived limitations of China.

China’s recent economic stagnation (India has now posted three consecutive quarters of higher growth) and its political barriers have restricted the success of many Western technology firms. Facebook has been banned in the country since 2009; its 2014 WhatsApp acquisition was largely the culmination of five years of effort to regain access to China’s 600 million Internet users. In spite of Mark Zuckerberg’s efforts, according to Statista, WhatsApp maintains a measly 23 million users on the mainland, only about four percent of mobile Internet users. The numbers are similar for other technology companies: Google holds only about an 11 percent share of the search market, and Twitter is banned outright, cutting off its potential user base entirely. Chinese competitors such as Baidu and WeChat have also grown from Western clones into highly sophisticated services tailored to the needs of the Chinese market.

Contrast the above scenario with India. According to The New York Times, India will reach nearly 168 million smartphone users this year, with 300 million Internet users overall. Facebook already has 138 million users in India, trailing only the United States. The situation is similar for Google India, which is behind only the United States in number of searches. But it is the prospect of the billion-plus unconnected people in the country, many of whom have never been on the Internet, that tantalizes Silicon Valley. Modi’s commitment to improving India’s technological infrastructure, and his willingness to let companies such as Twitter and Google send out real-time cricket scores via text and lay down fiber-optic networks, respectively, represent a departure from previous administrations.

Even though sizable technological upgrades are needed in the country, Silicon Valley’s most prominent CEOs believe the penetration opportunity is worthwhile and have already begun to invest in development initiatives. Google’s CEO, Indian-born Sundar Pichai, has announced that his company will provide Wi-Fi service to 400 train stations across the country, serving over 10 million passengers daily. To combat slow service speeds, which can run at a hundredth of what Americans enjoy, Google compresses web pages on its servers so they use 80 percent less data and load four times as fast; YouTube videos can now be downloaded over Wi-Fi for offline viewing. To connect with lower-income Indians, Facebook has expanded the reach of its Internet.org project, which provides free access to a package of mobile apps. Google, Facebook and Twitter have all added support for various Indian languages (India has over 20 official languages).
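The bandwidth arithmetic behind that compression claim is easy to check with made-up numbers. The 2 MB page size and 256 kbit/s link speed below are assumptions chosen for illustration; note that raw transfer time scales 5x with 80 percent less data, and it is rendering and other overhead that brings observed load times closer to the quoted 4x:

```python
# Illustrative arithmetic for the "80 percent less data" claim above.
# Page size and link speed are invented for illustration.

page_mb = 2.0                           # original page size, megabytes
compressed_mb = page_mb * (1 - 0.80)    # after removing 80% of the data

link_kbps = 256.0                       # a slow mobile link, kilobits/second

def download_seconds(size_mb, kbps):
    """Seconds to transfer size_mb megabytes over a kbps link."""
    return size_mb * 8 * 1000 / kbps    # MB -> kilobits, then divide by rate

original_s = download_seconds(page_mb, link_kbps)
compressed_s = download_seconds(compressed_mb, link_kbps)
print(f"{original_s:.1f}s -> {compressed_s:.1f}s "
      f"({original_s / compressed_s:.0f}x faster transfer)")
```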

But while Modi’s visits have created a wave of excitement in the United States, the response back home has been mixed, and the reaction of India’s start-up and entrepreneurial community muted. The country has long battled a “brain drain,” the emigration of its brightest minds in search of better opportunities abroad, as evidenced by the half-million Indian-born workers in Silicon Valley, according to The New York Times. There are also fears that preferential treatment of U.S. tech companies could make it harder for home-grown entrepreneurs to draw traffic to their own websites.

These fears are not completely unjustified. According to the Financial Times, Uber is decried in India as an example of a foreign company overstepping its bounds. A national outcry occurred in December, when an Uber driver in Delhi was convicted of raping a woman. It was later discovered that Uber did not conduct background checks on its Indian drivers and did not install panic buttons in their cars. Uber’s operations have been restricted since, and local taxi unions have lobbied with increasing force against the ride-sharing company’s expansion.

Nevertheless, all current signs point to a bright future for the Indian technology sector. According to a Nasscom-McKinsey report, the overall market is expected to grow from $132 billion today to $225 billion by 2020 and nearly $350 billion by 2025. Domestic consumption of technology-related goods and services over the same period is expected to double from $34 billion to over $70 billion. Notably, these figures do not account for the aggressive expansion that Modi’s alliance with Silicon Valley could bring to the subcontinent; those ambitious plans could accelerate growth at a far faster pace.

In previous years, this growth was limited by complex bureaucratic regulations and a dearth of foreign capital. An important tenet of Modi’s election campaign was his promise to cut those regulations in the hope that foreign firms would prioritize India. His warm reception in Silicon Valley and the persistence of companies like Uber in gaining a footing on the subcontinent indicate that the interest is there.

Although it remains to be seen when the world’s largest technology companies can make a sizable impact in India, the payoffs for both India and Silicon Valley could be enormous.


Think back to the last time you heard someone mention a home phone, or even the word landline.  Months ago, maybe even years ago?  That may be an exaggeration, but the demise of the home phone is not.

The copper-wired landline infrastructure that dominated U.S. communication for over a century is in sharp decline.  According to a survey by Stephen J. Blumberg, Ph.D., and Julian V. Luke of the Centers for Disease Control and Prevention, approximately 58 percent of American households had a landline in 2009; by mid-2013, the number had dropped to 49 percent.  Over the same period, the share of wireless-only households rose from approximately 24 percent to over 39 percent.  These rapidly shifting figures vividly underscore the widespread move from landline systems to cellular devices.
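The survey figures above imply a steady rate of change.  A simple linear extrapolation through them looks like this (purely illustrative, since real adoption curves flatten as they approach saturation):

```python
# Linear trend through the CDC survey figures cited above:
# wireless-only households rose from ~24% (2009) to ~39% (mid-2013).
# Purely illustrative; real adoption curves flatten near saturation.

start_year, start_share = 2009.0, 24.0
end_year, end_share = 2013.5, 39.0

points_per_year = (end_share - start_share) / (end_year - start_year)

def projected_share(year):
    """Linearly projected wireless-only share (percent) for a given year."""
    return start_share + points_per_year * (year - start_year)

print(f"growth: {points_per_year:.1f} percentage points per year")
print(f"projected 2016 wireless-only share: {projected_share(2016):.0f}%")
```

At that pace, wireless-only homes would pass the halfway mark before the end of the decade, consistent with the trend the survey describes.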

Young adults are the age group most likely to skip installing a landline and opt for what are now called “wireless-only homes.”  According to Victor Luckerson of TIME, people between the ages of 25 and 29 are the most likely to use cell phones exclusively.  Additionally, Luckerson reports that “Americans between 30 and 34 were the next largest group of cord-cutters, with 60 percent of them living in wireless-only homes. 53 percent of people between 18 and 24 are now cellphone-only, while 48 percent of people aged 35 to 44 and 31 percent of people aged 45 to 64 have made the jump.”

The landline system’s decreasing dependability has been cited as an additional catalyst of its demise.  The Federal Communications Commission estimates that as many as one in five inbound long-distance calls fail to connect. Rob Frieden, professor of telecommunications and law at Penn State, explains that “the switches — the actual infrastructure (used in the classic home phone system) — are reaching end of life,” just as the engineers who maintain them age out of the workforce and few new employees are trained to support the gargantuan machines.

States, too, are encouraging the abandonment.  According to Jennifer Waters of the Wall Street Journal, in March 2014 Michigan joined more than 30 other states that have passed or are considering laws that restrict state-government oversight and eliminate “carrier of last resort” mandates.  These laws would effectively end the universal-service guarantee that gives every U.S. resident access to local-exchange wireline telephone service, Waters reports.  Many states are also phasing out copper-based technologies, the foundation of landline systems, in favor of fiber. According to Meghan Damico of Black Box, relative to copper, fiber technologies provide greater bandwidth, allow greater speed and distance for data transmission, and offer better security and immunity from potentially harmful environmental factors. California, Texas, Florida, Georgia, North Carolina, Wisconsin and Ohio have all agreed to dispose of copper-based technologies in the next three years, while Kentucky and Colorado are weighing similar decisions, according to Waters.

Actions like these reflect states’ growing dissatisfaction with pre-existing communication laws that preserve the last remnants of the landline. States’ and telecommunications companies’ displeasure stems from their inability to replace POTS, the plain old telephone system, with Internet Protocol-based systems that use a single broadband network for Internet access, cable programming and telephone service.  John Stephenson, director of the Communications and Technology Task Force at the American Legislative Exchange Council, explains the problem:  “Those [rules] were written at a time when consumers had no choice in the matter.  If we were to clear the underbrush of these rules written long before the Internet was even a word, there would be a lot more broadband deployed to the United States, and things that are even better that we can’t conceive of today.”


Unsurprisingly, telecommunications giants have also had a hand in phasing out landline systems.  According to Waters, AT&T and Verizon Wireless are lobbying multiple states to retire the simplistic, old-fashioned telephone system.   Additionally, telecom companies, with the FCC’s permission, are conducting trials of the new system in select parts of the country.  According to John Brodkin of the New York Times, “the F.C.C. is proceeding carefully, letting AT&T conduct trials of the new phone system with willing consumers in parts of Alabama and Florida, that will help it measure quality and determine what kinds of rules to apply to the system.”  In essence, the new phone system is a desirable substitute because it would let telecommunications companies run a single broadband network powering Internet, cable and telephone service.

While more and more people are opting for lives without landlines, landlines still hold some concrete advantages over cell phones.  They are more reliable: they never need charging and do not suffer fluctuating connection strengths.  Additionally, security, particularly the lack of location data for 911 calls, is a serious concern with mobile phones.  Unlike landlines, which transmit location data to 911 dispatchers automatically over a hard-wired connection, mobile devices offer no such guarantee.  As John Kelly and Brendan Keefe of USA Today explained, “After the call comes in, the dispatcher’s computer transmits a digital request to the cellphone network seeking the phone’s location. The data exchange can take seconds or even minutes. Sometimes, it doesn’t return a location at all.”  Cost has also convinced many families to keep their landlines:  landline handsets sell for as little as 10 dollars, while popular mobile devices such as the iPhone and Samsung Galaxy sell for more than 500 dollars, according to Ian Linton of eHow.

While landlines do boast many benefits unparalleled by cell phones, the facts are indisputable.  People are increasingly switching away from home phone systems, and cellular devices are moving to the forefront of American communication.  While businesses will most likely continue using landlines, it is only a matter of time before American technology renders the personal landline obsolete.


Is this tech boom another bubble? The similarities are easy to spot. Snapchat recently obtained a $19 billion private valuation despite having no established revenue streams. Examples like this can be read as clear signs that valuations are once again spiraling out of control in the tech industry. Moreover, according to data from the National Venture Capital Association and PricewaterhouseCoopers, venture capitalists pumped $48.3 billion into 4,356 deals last year (the most since 2000), while venture financings of more than $500 million hit a six-year high. Similarly, this April the Nasdaq soared past the 5,048.62-point record set during the dotcom bubble for the first time.


Big names in the VC industry, including Andreessen Horowitz cofounder Marc Andreessen and Sequoia Capital chairman Sir Michael Moritz, have openly expressed their concern that a bubble is forming. Another familiar name, Dallas Mavericks owner Mark Cuban, claims that this bubble is actually worse than the one that burst in 2000, mainly because ordinary investors have shifted from public to private ventures, which he believes has eliminated the liquidity of their investments.

But perhaps things are not as bad as Mr. Cuban would like us to believe. Given the resemblance to the dotcom bubble, one has to wonder whether we actually learned our lesson or are about to fall into the same hole a second time. While there is no definite answer to this question, one thing is clear: things have changed. It has been 15 years since the dotcom bubble burst, and it has taken the venture capital industry a long time to recover. The data show that both investors and entrepreneurs have modified their approach, focusing on proven profitability and waiting longer before going public.

The public market is highly volatile and often irrational. For companies with a shaky business model and uncertain future revenue, this might be beneficial in the short run, as unsubstantiated “hype” and excitement about potential growth can attract large amounts of equity. However, the 2000 crash, which wiped out a total of $5 trillion, made it clear that this is also the perfect recipe for disaster: as soon as doubts arise, things can get very ugly very fast (the Nasdaq dropped 78% from March 2000 to October 2002). The disappointing aftermath of Facebook’s IPO (the stock fell 50% over the first six months) convinced entrepreneurs that it is not a good idea to take a company public before its business model and profitability have proven sound, even for firms with high chances of future profitability.

As a consequence, startups are opting to stay private for significantly longer, with average years to IPO increasing from 3.1 to 7.4 and average revenue at the time of IPO going from $35 million to $102 million over the last 15 years (Suster). Patience has indeed proven to be beneficial for up-and-coming tech companies. Today, firms are more seasoned by the time they go public, and a remarkable result of this approach has been a significant increase in the share of profitable technology companies in the market (from barely over 50% in 2000 to 90% in 2014) (Richardson). Entrepreneurs have learned the hard way that Wall Street analyzes firms very differently from private valuation experts, with the former being sharply focused on short-term profits and revenue, while the latter places more emphasis on long-term growth and market potential. Growing startups would much rather avoid Wall Street’s impatience and the scrutiny associated with having publicly traded stock.


This raises the question: what allowed the race to success to change from a sprint to a marathon? The answer is the venture capital industry, which has provided the money companies need to remain private. The consequence has been average late-stage funding skyrocketing to levels seen only during the dotcom era. While some see this as a clear sign of a bubble forming, given its resemblance to late-1990s trends, most experts would agree that late-stage funding is simply replacing the IPO as a fundraising vehicle during the extra years companies stay private. The opportunity to capture extra value in the private markets has led some hedge funds and other major non-private-market investors to become late-stage VCs. Even J.P. Morgan has jumped on board, developing debt products for high-flying startups that do not think they are ready for IPOs. The reason is that many investors lack the skills, time or experience to make patient, long-term private-market investments, and established late-stage companies (which in the past would certainly have gone public) are simply much safer bets. As a result, 66 percent of venture capital funding is now concentrated in late-stage investment.



A perhaps more legitimate concern arises from the fact that private valuations have been soaring out of control, with the combined valuations of the Top 30 US startups ballooning from $78.8 billion in March 2014 to $181.2 billion a year later. Although in theory they are meant to be based on revenue and EBITDA multiples, startup valuations are often not grounded in fundamentals. According to Randy Komisar, a partner at venture firm Kleiner Perkins Caufield & Byers: “these big numbers almost don’t matter… [they] are sort of made-up. For the most mature startups, investors agree to grant higher valuations, which help the companies with recruitment and building credibility, in exchange for guarantees that they’ll get their money back first if the company goes public or sells.” Public valuations, on the other hand, have improved markedly since the dotcom days, with the market’s average price-to-earnings ratio falling from roughly 200 to about 23.
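To make the contrast concrete, here is a back-of-the-envelope check using only the figures quoted above; it simply restates the article’s numbers as growth rates and ratios, nothing more.

```python
# Private side: combined Top 30 US startup valuations (from the text, $B)
top30_2014 = 78.8   # March 2014
top30_2015 = 181.2  # a year later
growth = (top30_2015 - top30_2014) / top30_2014
print(f"Top 30 combined valuations grew {growth:.0%} in one year")

# Public side: approximate market-average P/E ratios (from the text)
pe_dotcom, pe_now = 200, 23
print(f"Average P/E fell by a factor of about {pe_dotcom / pe_now:.1f}")
```

The private market figures imply roughly 130% growth in a single year, while the public market’s P/E has compressed nearly ninefold since 2000, which is the asymmetry the paragraph above describes.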



Since the Industrial Revolution, innovation and new technology have increased the efficiency of many different fields, promoting society’s overall growth. However, the introduction of these technologies has also induced much controversy, manifested in the outcry over diminishing job numbers and big-screen science fiction catastrophes.

It is no secret that robots are ever encroaching on our jobs. A recent study from the Oxford Martin School’s Programme on the Impacts of Future Technology showed that 45% of jobs today will be computerized in the next few decades. The most vulnerable fields, according to the report, are “transportation/logistics, production labor, and administrative support… services, sale, and construction.” Further, the report suggests that advances in artificial intelligence could put traditionally secure jobs in management, science and engineering, and the arts at risk of automation.

These advances in technology, in turn, will require specialized skill sets, bringing higher education into the equation. Advanced schooling will be required in order to understand the concepts needed to keep up with changing standards. Employers will gravitate toward potential employees who have the education required to operate, and even create, new innovations. In short, as the basis for technology changes, the base requirements will change as well.

With total student debt exceeding 1 trillion dollars, it’s not hard to see why there is so much debate today about the value of a college education. Former U.S. Secretary of Education William Bennett recently published a book, Is College Worth It?, urging families to reconsider their reasons for sending their kids off to get a BA and to consider alternatives such as community college.

On the other hand, according to MIT economics professor David Autor’s study on the wage gap between college graduates and everyone else, “not going to college will cost you about a half a million dollars.” Beyond wages, there are harder-to-measure benefits of going to school as well, including networking, prestige, and experience. David Leonhardt of the New York Times contends, “a four-year degree has probably never been more valuable.”

However, Leonhardt’s article does concede one important point: “Tellingly, though, the wage premium for people who have attended college without earning a bachelor’s degree — a group that includes community-college graduates — has not been rising.” This creates a distinction that has important implications in the struggle to adjust to automation.

Low-skill workers and skilled workers are affected differently

Tim Worstall, a Forbes contributor and a Fellow at the Adam Smith Institute in London, explains that automation will only slightly increase the already high job turnover rate, implying that displaced workers will be able to find new or newly created jobs. This, however, assumes that those workers are capable of adapting to the new jobs. According to the Oxford Martin School report, low-skill workers, on the other hand, “will reallocate to tasks that are non-susceptible to computerization—i.e., tasks that require creative and social intelligence.”

The necessity (and ambiguity) of creative and social intelligence further complicates the price tag of a college education. Families are now left to decide whether or not a four-year college is the only means to acquire such skills, and if thousands of dollars is the right price tag.

Most of the aforementioned statistics support the assertion that a college degree is worth the cost. However, Ben Casselman of FiveThirtyEight contends: “just because people who graduate from college are better off doesn’t necessarily mean that going to college is a good decision.”

The rationale for students and their families must include not only economic costs and benefits but also the likelihood of actually obtaining a degree. As Casselman rightly points out, data from the National Center for Education Statistics show that less than 60% of “first-time full-time bachelor’s degree-seeking students at 4-year institutions” graduate within six years. That number drops to 38.6% for graduation within four years, and more time at school means more cost.

Leonhardt writes, “As the economy becomes more technologically complex, the amount of education that people need will rise.” What seems to follow is that the income gap between four-year college graduates and all others will only continue to rise.

For jobs, automation will only continue to do what it has been doing: induce an uptick in the turnover rate. The path to those jobs, education, is more complicated. The volatility, uncertainty and cost of a college education understandably make many prospective students balk. Attending college remains a decision unique to each individual’s circumstances. What technology will continue to do is make students think hard about the alternatives. College no longer seems the sure-fire bet it once was.


It’s no secret that since Jobs’ passing just over three years ago, Apple has slowed its rate of innovation. Instead of creating new devices at its usual pace, Apple has been releasing derivative products that change less and less with each iteration. These are not encouraging signs for a company whose success is built on its ability to create innovative products.

Given this situation, it was no surprise when Apple’s current CEO, Tim Cook, became the scapegoat for the company’s weaker performance. Cook has been blamed for everything from lacking vision and exhibiting poor leadership to making faulty business decisions and selling out Apple’s values.

While these criticisms may be true to some extent, it’s important to recognize that Jobs’ abilities and importance to the company tend to be overstated, and Cook’s understated. Only by looking beyond the two CEOs can we appreciate the new competitive environment Apple finds itself in, as well as its impending institutional shift.

Ghost of CEOs past

One of the many things Jobs established during his time at Apple was a cult of personality around himself. It was Jobs’ ideas which inspired the company, Jobs’ personality which made the company work, and Jobs’ leadership that kept the company successful. Jobs was Apple; Apple was Jobs. A permanent spotlight thus became focused on Apple’s CEO.

When Cook took that title in August 2011, the spotlight transferred onto him. Unfortunately, the cult of personality didn’t; it remained centered on Jobs. Though Cook claims that Jobs told him, “I never want you to ask what I would have done. Just do what’s right,” few have resisted the temptation to compare. Combined with Apple’s downtrend, it was only a matter of time before Cook became the disappointing younger sibling.

Of course, no Apple product has ever been the work of a single man, even if that man is named Steve Jobs. Indeed, Jobs worked closely with Jonathan Ive, Apple’s Senior Vice President of Design and the man Jobs called his “spiritual partner”. As the designer of the iPod, iPhone, iPad, and other Apple products, Ive can be described as Apple’s other creative genius. And while some may argue that it was Ive and Jobs together who made Apple products so unique and successful, it’s very unlikely that Jobs handpicked Cook as his successor without considering Cook’s abilities as an innovator. At a minimum, the pairing of Ive and Cook approximates that of Ive and Jobs.

Furthermore, the impact Jobs had has often been exaggerated. Jobs seems to have actively presented himself as playing a larger-than-life role at Apple. According to Walter Isaacson’s acclaimed biography of Jobs, he regularly took personal credit for other people’s ideas. And because Jobs was the face of Apple, it was Jobs, not an anonymous Apple engineer, whom consumers credited with attending to the details of each device. The list of people from whom Jobs pocketed ideas includes Ive and cofounder Steve Wozniak. As Jobs put it, “Good artists copy, great artists steal.”

With these items in mind, it becomes clear that the disparity between Cook and his predecessor isn’t as wide as critics make it out to be. Indeed, if we believe so much in Jobs, we should also believe in his choice of Cook as his successor. But if the identity of Apple’s CEO isn’t the cause of Apple’s slowdown, what is?

The “Cult of Mac”

To understand why Apple hasn’t been keeping up at its normal pace, one must first understand its customers. In a paper entitled “The Psychology of Intuitive Forecasts of New Product Utility”, authors Robert Meyers of the University of Pennsylvania’s Wharton School of Business and Shenghui Zhao of the University of Miami’s School of Business Administration explain that consumers make decisions about whether to buy a new product based on projections of its benefits. Since the reality of these guesses can only be observed after the product has been purchased, Meyers and Zhao argue that what consumers are really buying is “a speculative option: the ability to begin a stream of consumption that will reveal whether or not something is worth consuming in the future”.

This is the mechanism that formed Apple’s famously loyal customer base. By creating a line of innovative devices Apple has trained its clientele to expect a certain level of satisfaction from Apple products. Appropriately, the “Cult of Mac” believes so strongly that Apple is “worth consuming in the future” that they have gone to extremes to follow its products.

Few companies in the world enjoy such devoted customers, but as Apple is slowly coming to realize, this blessing is also a curse. Apple appears to have exhausted its current directions of innovation and needs time to find new ones. The problem is, its fans aren’t used to being patient. It is this pressure to produce something as groundbreaking as its past products that has put Cook, and all of Apple, under scrutiny. Forced to release something new, Apple can only improve its existing devices to buy time. But even iterating on old designs diverts resources that would otherwise go toward innovation, and so Apple finds itself in a difficult balancing act between the customers it must satisfy and the time it needs to research and create.

Rearranging the totem pole

Meanwhile, the tech industry is changing. Last year marked the first time digital track sales (think iTunes) declined as streaming services grew more popular. Apple’s response has not been convincing. Earlier this year the company acquired Beats Electronics for, according to many analysts, no apparent reason. Critics dismissed the idea that Apple bought Beats for its streaming service, pointing out that the company could have achieved the same end by expanding iTunes or directly acquiring an already well-established service such as Spotify. And as John Gruber of Daring Fireball notes, “If Apple wanted to sell expensive high-end headphones, they [didn’t] need to spend $3 billion.”

Apple’s competitors, however, have certainly been busy. Heading the charge against Apple is Samsung, whose Galaxy S5 proved popular enough earlier this year to take a (albeit small) share of the smartphone market from Apple’s iPhone. In September, Apple answered with the iPhone 6 and iPhone 6 Plus, phones surprisingly reminiscent of the S5 with their larger-than-traditional 4.7” and 5.5” displays, respectively, and similarly rounded edges. This apparent imitation came on top of already comparable technical specs between the two brands’ devices. Not to be outdone, Samsung released the Galaxy Alpha later that month and the Note 4 the month after, both of which have been widely praised for their increased power and improved design.

Apple’s decline in the tablet market is even more dramatic. According to the market research firm International Data Corporation, the iPad held only 26.9% of the market as of the second quarter of this year, down from 60% two years ago. Samsung once again gained ground against Apple, adding almost 10 percentage points to clinch 17.2% of the market.

Apple is also shockingly late to the emerging smartwatch market. While Apple plans to release the Apple Watch early next year, it has been more than a year since Samsung introduced the Galaxy Gear, and this past spring Motorola Mobility came out with the Moto 360. Furthermore, the current leader of the smartwatch trend, from the up-and-coming company Pebble, supports both iOS and Android, making the already generously praised device uncomfortably competitive with Apple and Android offerings alike. And while the Apple Watch takes its time coming to market, the Pebble Steel, Pebble’s next-generation smartwatch, has already arrived.

“Think Different”

So what does all this mean for Apple? One interpretation of Apple’s recent slowdown is that the company is maturing and, in the process, transitioning from creating to perfecting. “Cook and Apple are facing the harsh reality that no company can expect to continue innovating at a consistently high rate… retain control of large market shares and provide high profit margins in the face of increasing competition,” notes Wharton’s online business journal, Knowledge@Wharton. And it may be that Cook, with his efficient and methodical style, is exactly the right person to manage this transition. Apple’s goal is no longer to get to the top but to stay there, and to do that it will need to become more competitive.

Another possibility is that Apple is on the brink of an entirely new venture. The personal device industry has long since become saturated, and it’s hard to imagine many more revolutionary advances, even from Apple. If Apple wishes to remain “the innovator”, it has to start innovating in an entirely new branch of technology. In fact, Apple may already be stepping into wearable tech with the Apple Watch, and it’s easy to picture a cooler Apple counterpart to Google Glass in the near future. Alternatively, it could explore smart home appliances, as former Apple engineers Tony Fadell and Matt Rogers did with Nest.

Apple is at a crossroads, and judging from its recent change in behavior, Apple knows it. The Apple of yesterday can’t hold up much longer; now is the time to figure out what comes next. It’s a daunting task, to be sure. But with the renewed leadership of CEO Tim Cook, nearly limitless resources, and the privilege of reputation and loyal fans, Apple will succeed as it always has.


Do you remember when Myspace was the king of social networks? Launched in 2003, Myspace dominated the online social sphere from 2005 until 2008, when Mark Zuckerberg’s Facebook made it obsolete. Facebook gained popularity so quickly that by August 2008 it had over 100 million monthly active users. Now, Facebook has more than 1.32 billion. Like all products, however, Facebook’s growth has slowed. The social network makes enormous amounts of money by selling advertising space and users’ data to third parties. Although Facebook is constantly evolving, many users are becoming concerned about their privacy: Facebook began ad-free but quickly changed its privacy policies in order to gather and sell users’ data to advertisers. Ello, in contrast, began as a private social network but, due to high demand, is opening to the public. Many attribute this demand to its unconventional privacy policies, which users realized were very important to them.

Ello, founded in March 2014 by Paul Budnitz and a team of six other artists and programmers, is described by many as a sort of “anti-Facebook.” The team consists of Budnitz, who came up with the idea for Ello; Todd Berger and Lucian Föhr, two well-known graphic designers from Colorado; and four programmers, Gabe Varela, Matthew Kitt, Jay Zeschin and Justin Gitlin. Ello’s main features are complete privacy and an absence of advertisements and data collection. Ello gathers only site usage statistics, compiled so that the data cannot be traced back to any individual user, and it even allows users to opt out of this tracking entirely. The website’s layout is stylistically clean and simple, and the social network is still in beta, which means users can only join by invite.

Despite its youth, Ello has drawn a lot of publicity. Awareness of Ello exploded in September 2014 after Facebook ran into controversy over LGBTQ+ issues: Facebook requires users to use their real names, which some argued would exclude drag queens from the social network. As a result, Ello began to see record invite requests, peaking at 35,000 per hour in late September. Invites became so coveted that some people sold theirs on eBay for around $500. With no advertisements, complete privacy and such high initial demand, it is not difficult to see why some think Ello could trump Facebook. To topple the king of social networks, however, Ello will need to overcome many challenges.

Some of the most stringent criticism Ello has received addresses the design of its website. Although the design is clean, with lots of whitespace, many complain that the user interface is not intuitive. There is a single omnibar that serves all at once as a place to post, search content, send messages and tag people in posts. While some find this useful, many others find it confusing: typing “@” tags someone in a post, while typing “@@” sends a private message. Additionally, since Ello is still in beta, there are many glitches and bugs, due partly to the tiny team that currently manages the network and partly to interface specialization. The trick is to make a social network unique enough to offer a differentiated product that will draw users away from big players such as Facebook and Twitter, while remaining similar enough to make the transition smooth and effortless for those same users.

Ello also lacks a business model outlining how it will ultimately generate cash flow and revenue. Facebook makes money by selling advertisements and data collected from its users; since Ello’s selling point hinges on its promise never to run ads or track users, how exactly will it generate revenue? The company was funded by an initial investment from a venture capital firm that will certainly expect returns. Early critics thus hypothesized that Ello’s founders would eventually face a stark choice: abandon their values of privacy and no advertising, or go bankrupt. However, on October 23, 2014, Ello officially became a U.S. Public Benefit Corporation (PBC), defined as “a special for-profit company” that operates to produce a benefit for society as a whole. As a PBC, Ello is legally obligated to consider its impact on society in every decision it makes, and its charter codifies rules that effectively prohibit it from ever selling user data or displaying paid advertising. Instead, founder Paul Budnitz says Ello is looking into a “freemium” model in which users would pay a small amount for extra features. This plan remains vague, however, and would most likely not deliver substantial returns. Since hosting and maintaining a social network requires vast financial and human capital, Ello will have to figure out a way to cover its costs and pay its employees just to stay afloat.

Finally, Ello faces a steep uphill battle against Facebook, which has the first-mover advantage. Facebook already dominates the social scene and has recently acquired several companies, such as Instagram, amassing considerable power. It is well refined and offers many features beyond posting (games, apps, etc.), so it benefits from vast economies of scale as well as network effects. The point of a social network is to connect with a wide range of people, and Facebook’s staggering 1.23 billion monthly users make it a powerful force to contend with. By comparison, Ello has only a little over one million users, of which only 36 percent have posted at all. Of those, only 27 percent have posted more than three times. Ello, with its strong emphasis on customer privacy, has introduced an interesting new value proposition, but it clearly faces many challenges. Ultimately, time will tell whether Ello has a chance at success or will fail like so many social networks before it.
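The engagement gap is easier to feel in absolute numbers. A quick calculation, using only the percentages quoted above and treating Ello’s user base as exactly one million for simplicity:

```python
# Ello engagement figures from the text, made concrete.
users = 1_000_000            # "a little over one million users" (rounded down)
posted = users * 36 // 100   # 36% have posted at all
active = posted * 27 // 100  # 27% of those have posted more than three times
print(posted, active)        # -> 360000 97200
```

So under 100,000 people have posted more than three times, against Facebook’s billion-plus monthly actives, which is the scale of the network-effect problem Ello faces.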




In Gasland (2010), a man lights his tap water on fire; his bitterness is mixed with images of desolate drill sites and weary faces. Though dramatic, is this scene a fair criticism of the practice? Opponents spit out the word, fracking, a word almost as ugly as the visions of uprooted landscapes and the plight of victims powerless against Big Energy yet again. For a few moments, set aside visceral reactions, quick emotion and gut appeal, and take a glance at hydraulic fracturing, an industry slogging through the politics of energy and environmental protection.


Hydraulic fracturing has been around for a long time. It was patented in 1949 and only recently has been combined with other technologies to tap previously inaccessible shale gas. The process involves the injection of a mixture of water, proppant such as sand, and chemicals into an oil or gas well.

The fluid is pumped into the horizontal bore several thousand feet under the ground and creates fractures in the surrounding shale rock. The proppant enters these cracks, “propping” them open after the water flows back out. The chemicals do many different things, such as gelling the water on its entry and reducing friction.

The shale formations now under scrutiny for natural gas could not previously be exploited: although they hold large reserves (the Marcellus Formation in Appalachia alone holds 84 trillion cubic feet), shale is not naturally very permeable.

Now, a multitude of previously inaccessible natural gas sources can be tapped, such as black shales, coal seams, tight sandstones, and deep brine aquifers. Proponents point to these sources and note their relatively small extraction risk compared with offshore, arctic, or ultra-deep drilling.

In 1990, the United States produced the energy equivalent of 70 quadrillion Btu (British thermal units; one Btu equals 1,055 joules). That number remained roughly steady through 2006, at 69.4 quadrillion Btu, then rose as hydraulic fracturing, combined with horizontal drilling and other new technologies, became more widespread. In 2010, 74.712 quadrillion Btu were produced; in 2011, 78.091 quadrillion Btu. A large part of this increase stemmed from natural gas production, which grew from 19 quadrillion Btu in 2006 to 23.6 quadrillion Btu in 2011.
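A quick sanity check on those figures, using only the numbers quoted above, shows how much of the recent rise is attributable to natural gas:

```python
# U.S. primary energy production, quadrillion Btu (figures from the text).
total_2006, total_2011 = 69.4, 78.091
gas_2006, gas_2011 = 19.0, 23.6

total_gain = total_2011 - total_2006
gas_gain = gas_2011 - gas_2006
print(f"Total production rose {total_gain:.1f} quadrillion Btu")
print(f"Natural gas accounts for {gas_gain / total_gain:.0%} of that rise")

# For scale: 1 Btu = 1,055 J, so 1 quadrillion Btu is about 1.055 exajoules.
print(f"2011 output is roughly {total_2011 * 1.055:.1f} EJ")
```

By this arithmetic, natural gas supplied a bit over half of the 2006-2011 production increase.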

The United States has become the second largest natural gas producer in the world, just behind Russia.


In 2011, the U.S. produced 8.5 trillion cubic feet of natural gas from shale gas wells; at $4.24 per thousand cubic feet, that yielded a direct value of roughly $36 billion. Citibank estimates that rising domestic shale oil and gas production, by reducing oil imports and retaining “petro-dollars” in country, will shrink the current-account deficit by 1.2-2.4% of GDP from its current value of 3%.
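The $36 billion figure follows directly from the two numbers in the sentence above:

```python
# Direct value of 2011 shale gas output (inputs from the text).
production_cf = 8.5e12   # cubic feet produced
price_per_mcf = 4.24     # dollars per thousand cubic feet
value = production_cf / 1_000 * price_per_mcf
print(f"Direct value: ${value / 1e9:.1f} billion")  # -> Direct value: $36.0 billion
```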

While other industries have sputtered in the wake of the 2008 recession, oil and gas have remained a powerhouse of employment, with the number of employees at the end of 2012 at its highest since 1987.

Through both direct (employment) and indirect (influx of people and money) economic impact, multiplier effects echo throughout local economies. Land prices surge in a state after fracking is legalized, and the high prices affect all landowners’ wealth and consumption.

Nowhere is this more apparent than in North Dakota, where per capita GDP rocketed from $34,000 to $55,000 after less than a decade of drilling in the Bakken formation. North Dakota’s luxury car dealers, apparently, are doing a tidy business.

Gas is also the cleanest fossil fuel when burned. Combustion produces no sulfur, mercury, or ash, and no cracking or refining is required, lowering processing costs. It releases only small quantities of nitrogen oxides and complex hydrocarbons, limiting the formation of photochemical smog. Finally, it releases the least carbon dioxide per Btu of any fossil fuel: roughly half as much as coal and two-thirds as much as oil.
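The half-of-coal, two-thirds-of-oil comparison can be checked against published emission factors. The specific factor values below are my assumptions, approximating commonly cited EIA figures in pounds of CO2 per million Btu; they are not taken from this article.

```python
# Approximate combustion emission factors, lb CO2 per million Btu.
# These specific numbers are assumed for illustration, not from the text.
lb_co2_per_mmbtu = {"natural gas": 117.0, "fuel oil": 161.3, "coal": 205.7}

gas = lb_co2_per_mmbtu["natural gas"]
for fuel in ("coal", "fuel oil"):
    ratio = gas / lb_co2_per_mmbtu[fuel]
    print(f"Gas emits {ratio:.2f}x the CO2 of {fuel} per Btu")
```

Under these assumed factors the ratios come out near 0.57 and 0.73, broadly consistent with the one-half and two-thirds figures quoted above.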

Finally, hydraulic fracturing has directly shifted the energy balance between the U.S. and other countries. Between 2007 and 2011, U.S. natural gas imports decreased by 25%, while petroleum imports dropped 15.4% from 2005 to 2011. The Energy Information Administration predicts that by 2020 the U.S. will become a net exporter of natural gas. This, thankfully, will ease tension between the Americans and the Chinese over limited Middle Eastern natural gas resources. Countries such as Iran will also be more limited in their ability to use energy as diplomatic leverage.


Fracking does come with its cons: seismic activity, water resource risks, waste management, and extraction infrastructure, to name a few. However, it is important to distinguish between definitive negative consequences and the assessment of risk.

Some of fracking’s consequences are well-defined. Each well requires 3–4 million gallons of water, two-thirds to three-quarters of which is consumed and cannot be reused. Each well also produces huge quantities of drill cuttings: hundreds of tons of earth removed from thousands of feet underground and brought to the surface.

Many of the other environmental costs, however, are measured in terms of risk. To be precise, one may define risk as the probability of a consequence multiplied by the severity of the consequence itself. Thus, though there are risks of water quality degradation, of toxic trace elements inside the earth making their way into water supplies, and even of seismic activity, many of these risks are realized only through improper management of drill sites and a lack of foresight regarding waste management. As with other risky fuel extraction processes, such as deepwater drilling, appropriate safety processes simply have to be implemented.
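The risk definition above can be made concrete with a small sketch. The scenario names and numbers below are hypothetical, chosen only to illustrate how the probability-times-severity definition separates unlikely-but-severe events from likely-but-mild ones, and how better management lowers risk by lowering probability:

```python
# Risk as defined above: probability of a consequence times its severity.
# All scenarios and figures here are hypothetical, for illustration only.

def risk(probability: float, severity: float) -> float:
    """Expected impact of an adverse event: probability times severity."""
    return probability * severity

# Hypothetical per-well scenarios: (annual probability, severity on a 0-100 scale).
scenarios = {
    "water quality degradation": (0.02, 80),
    "toxic trace element migration": (0.005, 100),
    "induced seismic activity": (0.001, 60),
}

for name, (p, s) in scenarios.items():
    print(f"{name}: risk = {risk(p, s):.3f}")

# Better site management lowers the probability term, and with it the risk,
# even though the severity of the consequence itself is unchanged.
baseline = risk(0.02, 80)
managed = risk(0.02 / 10, 80)
print(baseline > managed)
```

The point of the calculation is the one the paragraph makes: the severity of a contaminated aquifer does not change, but diligent site and waste management drives the probability, and hence the risk, toward zero.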


Globally, we use roughly 113,900 terawatt-hours of fossil energy per year, the equivalent output of 6,020 nuclear plants (14 times the number in operation today). As countries such as China and India raise their standards of living, their citizens have increasingly come to expect the amenities of the modern world.

In essence, all forms of energy production have environmental consequences. Wastewater disposal issues plague almost all energy production; a fraction of gasoline stations, for example, routinely suffer leaks that leach benzene into the water supply.

Like it or not, the world needs energy. In light of this, hydraulic fracturing should be considered with a scientific, rational eye. Rather than demonizing fracking and instinctively rallying against a new technology, we should consider it a component of a complex solution to an enormous problem: supplying energy to a bright and tech-hungry world.

The author is deeply grateful for the guidance of Professors Devon Renock and Mukul Sharma of the Dartmouth Earth Sciences Department throughout his research project investigating the clay microstructures of Marcellus Shale.


On April 12th, Facebook acquired the well-known app company Instagram for approximately $1 billion. The price may be bewildering to some; after all, how could a company with just one free smartphone app, 13 employees, and negligible tangible assets sell for the same price as a small island nation? The app’s functions, while clever, are nothing Facebook couldn’t recreate for a small fraction of the acquisition cost. Instagram simply lets users take photos from their smartphones and applies various digital filters in the style of old Polaroid cameras, producing pictures with a distinct vintage feel that can then be shared to various social networks. The app was a breath of fresh air for users (read: hipsters) who wanted that old-school look to their pictures without having to lug around a bulky analog camera. From a functionality standpoint, that’s all the company offers. Any decent programming team could produce (and many have produced) almost identical apps.

However, this train of thought misses the point. The lack of proprietary value in the app belies the true value of this deal to Facebook: the network of people that Instagram can bring to the social networking giant.

Instagram has over 30 million registered accounts, a vast network of mobile users that holds huge potential for a social network like Facebook. This number should continue to soar, as Instagram only recently began expanding beyond iOS devices (Apple’s lineup of mobile electronics, including the iPhone, iPad, and iPod Touch) to the most popular smartphone operating system in the world, Android. The app was downloaded over one million times in its first 12 hours on the iOS alternative, a measure of its eager user base. Despite the staggering number of users, Instagram has made no revenue to date, leading many to say that it has no business model at all. In other words, with nothing proprietary, no real hope for future revenue, and only popularity and polish to its name, Instagram is worth $1 billion for its loyal mobile users alone.

In Facebook’s eyes, what users actually do with the app is irrelevant, so long as it gives Facebook access to users it could not reach before. Facebook’s own business model depends on getting as many users as possible to use the site as much as possible, and one place it has not been able to do so is mobile apps. Facebook not only earns revenue when users click on ads relevant to them; by analyzing its users’ preferences, it can show each user the ad they are most likely to click on, thereby maximizing ad revenue.
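The ad-selection logic described above can be sketched in a few lines: for each user, show the ad with the highest expected revenue, i.e. the estimated probability of a click times the ad’s payout. The users, ad names, and numbers below are made up purely for illustration:

```python
# Hedged sketch of preference-based ad selection: maximize expected revenue,
# defined as estimated click probability times payout per click.
# All ads and figures are hypothetical.

def expected_revenue(click_prob: float, payout: float) -> float:
    return click_prob * payout

def best_ad(ads: dict) -> str:
    """Return the ad name maximizing click_prob * payout."""
    return max(ads, key=lambda name: expected_revenue(*ads[name]))

# Hypothetical ads for one user: name -> (estimated click probability, payout in $).
ads_for_user = {
    "camera gear": (0.04, 0.50),
    "travel deals": (0.01, 3.00),
    "phone cases": (0.08, 0.20),
}

print(best_ad(ads_for_user))
```

Note that the highest click probability does not win by itself: a rarely clicked but high-payout ad can have the larger expected value, which is exactly why profiling users to sharpen the click-probability estimates is worth so much to an advertising business.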

Instagram not only adds another way to profile users; it also adds a brand new network to Facebook’s massive web. Facebook has a mobile app for its social network, but amid poor reviews it has not seen a great increase in traffic from it. Instagram’s users are exclusively mobile, and the social network simply wants to turn that network of Instagram users into new mobile Facebook users. In a statement regarding the purchase, the company emphasized the importance of mobile usage, calling it “critical to maintaining growth and engagement over the long term.” Ultimately, this is not a purchase of an app, or of some employees, but an acquisition of users, which is well worth it to a modern internet company like Facebook.

The Instagram acquisition reflects an industry-wide trend of buying companies to capture their networks despite their apparent lack of a business plan. Companies like Groupon, Pandora, LinkedIn, and Yelp all attract investments valuing them at hundreds of millions of dollars, largely for the users they bring to their investors. Each of these companies stakes its future on its users having intrinsic monetary value, assuming it will inevitably make money off of them through advertising. With the power of advertising that tracks users’ preferences, capturing networks may prove the key to capturing the riches of the Internet…or it could turn out to be fool’s gold. The user base may represent incredible potential profit, but it seems increasingly dubious that these companies’ sky-high stock prices reflect their true value.

Such was the problem of the Web 1.0 bubble, when popular companies with no real earnings potential were gobbled up by investors and failed spectacularly. Could we be seeing a new Web 2.0 bubble, a severe overvaluation of company networks doomed to failure? Or is Facebook slowly consolidating users to the point where it will be a financial success until the end of time?

Either way, we are entering an era where the people that follow a company are far more valuable than the company itself.