Friday, June 30, 2017
Google’s Elite Hacker SWAT Team vs. Everyone - Fortune Magazine
Jun 23, 2017
http://fortune.com/2017/06/23/google-project-zero-hacker-swat-team/
Brash. Controversial. A guard against rising digital threats around the globe. Google’s Project Zero is securing the Internet on its own terms. Is that a problem?
One Friday afternoon in February, Tavis Ormandy, a virtuosic security researcher with a brown buzz cut and an uneasy smile, was performing some routine “fuzzing,” a common code-testing technique that blasts software with random data to expose faults, at his desk at Google headquarters in Mountain View, Calif. The process was going as expected when he spotted something amiss in the data set. Weird, he thought. This isn’t typical corrupted data. Instead of the expected output, he saw bizarrely configured anomalies—strange chunks of memory strewn about. So he dug deeper.
After assembling enough information, Ormandy called his fellow security researchers into a huddle to share what he had found. The Google team, which goes by the name Project Zero, soon realized what it was looking at: a wide-ranging data leak spouting from a San Francisco company called Cloudflare. Most of the time, Cloudflare’s content-delivery network processes roughly a tenth of the world’s Internet traffic without a hitch. But Ormandy had discovered that the company’s servers were splattering people’s private data across the web. The information had been leaking for months.
Ormandy didn’t know anyone at Cloudflare, and he was hesitant to cold-call its generic support line so late in the day ahead of a three-day weekend. So he did the next best thing he could think of. Ormandy took to Twitter to appeal to the tens of thousands of people who follow him there.
Could someone from Cloudflare security urgently contact me
The time stamp was 5:11 p.m. Pacific Time.
Ormandy did not bother to alert the company’s Twitter account by tagging its name with an “@” symbol. He didn’t need to. Such is his reputation among the zealous community of information-security professionals that within 15 minutes of Ormandy’s pressing “Send,” everyone in the world who needed to know—and plenty who didn’t—would see the note.
At 1:26 a.m. local time John Graham-Cumming’s phone, plugged into an outlet by his bedside in London, buzzed him awake. The chief technology officer of Cloudflare rubbed his eyes and reached to pick up the rumbling handset. Missed call. A colleague—one of the few whom Graham-Cumming had white-listed to reach him after midnight—had called. The CTO fired off a text message asking what was up.
His colleague responded immediately.
very serious security issue
Graham-Cumming sat up, alarmed, and replied.
I will get online
The CTO rose from bed, went downstairs to the basement, and grabbed the emergency bag—charger, headphones, extra batteries—that he had stowed for such an occasion. He booted up his laptop computer and quickly joined a Google Hangout with his colleagues at Cloudflare’s California headquarters.
The security team briefed him on the unfolding situation. Google’s Project Zero team had found a bug in Cloudflare’s infrastructure—a bad one. The servers that help run more than 6 million customer websites, including those of the FBI, Nasdaq, and Reddit, had sprung a data leak. Anyone could access a Cloudflare-supported site and retrieve in certain circumstances the intimate details—authentication tokens, cookies, private messages—of users of another site on its network, among them Uber, 1Password, OKCupid, and Fitbit.
The information was hidden in plain sight. Worse, search engines and other web crawlers had been storing the leaked data in their caches for months. Plugging the leak would not fully solve the problem.
“I liken it to an oil spill,” Graham-Cumming says. “It’s easy to deal with a hole in the side of a tanker, but then you’ve got a lot of seabeds that need to be cleaned up.”
So Cloudflare’s engineers got to work. Security chief Marc Rogers, who in his spare time serves as a consultant for the USA Network hacker drama Mr. Robot, led the triage effort. In less than an hour the team pushed out an initial mitigating update that plugged the leak worldwide. After several hours the technicians successfully rolled back functions that had contributed to the error. Almost seven hours after Ormandy fired off his tweet, Cloudflare’s engineers managed to enlist the major search engines—Google, Microsoft, Yahoo—to clear their historical web page caches.
It was the beginning of a very long weekend. Cloudflare engineers spent the rest of it evaluating how much and what kind of data had leaked as well as how far the mess had spilled.
Google’s Project Zero team was initially impressed with the rapid response of Cloudflare, which has a reputation for transparency when it comes to security matters. But the relationship began to fray as the teams negotiated when they would publicly reveal what had transpired. The companies tentatively agreed to make an announcement as early as Tuesday, Feb. 21. As the day waned, Cloudflare decided it needed more time for cleanup. Tuesday became Wednesday. Wednesday became Thursday. Google put its foot down: Thursday afternoon would be the day the companies published details of the leak, which Ormandy dubbed “Cloudbleed,” whether or not Cloudflare had completed its assessment and ensured that the leaked data was clear from online caches.
Both advisories went up on Feb. 23. A weeklong Internet panic ensued.
You don’t have to be a member of Google’s Project Zero to know that security crises are on the rise around the globe. Every company has become a tech company—and so hacks are increasingly becoming commonplace, draining corporate bank accounts, spying on individuals, and interfering in elections. The headlines are sobering: More than 1 billion Yahoo accounts compromised. Tens of millions of dollars stolen through the SWIFT financial network. Countless private emails from the Democratic National Committee exposed ahead of the 2016 U.S. presidential election. (For more on how business is responding, read “Hacked: How Business Is Fighting Back Against the Explosion of Cybercrime.”)
U.S. companies and government agencies reported 40% more breaches in 2016 than in 2015, and that’s a conservative estimate, according to the Identity Theft Resource Center. At the same time, the average cost of a data breach now runs organizations $3.6 million, according to an IBM-sponsored study conducted by the Ponemon Institute, a research group.
Whether the result of a programmer’s error or hackers working for a nation-state, data leaks are the new norm. So executives are coming to terms with the idea that it might be more economical to nip coding issues in the bud before they lead to bigger—and messier—problems down the road.
But it’s not that simple. Too many organizations either don’t prioritize security or view it as an impediment to meeting product development and delivery deadlines. According to Veracode, an application-security firm acquired by CA Technologies earlier this year, 83% of the 500 IT managers it surveyed admitted that they had released code before testing for bugs or resolving security issues. At the same time, the security industry faces a talent shortage. Cisco estimates that there are 1 million unfilled security jobs worldwide, and Symantec predicts that will increase to 1.5 million by 2019. Some estimates put that figure at 3.5 million by 2021.
Even if a company has the funds, initiative, and cachet to support a proper security staff, it’s not immune to shipping flawed code. The best quality-assurance programs and agile development practices can’t catch every bug.
So many companies, including Microsoft and Apple, have internal security-research teams that investigate their own software. But few have teams that focus on the software made by other companies. That is what makes Google so unusual. To Ormandy and the dozen or so ace computer crackers who make up Google’s Project Zero, there are no boundaries to their jurisdiction—anything that touches the Internet is fair game. Policing cyberspace isn’t just good for humanity. It’s good for business too.
Google officially formed Project Zero in 2014, but the group’s origins stretch back another five years. It often takes an emergency to drive most companies to take security seriously. For Google, that moment was Operation Aurora.
In 2009, a cyberespionage group associated with the Chinese government hacked Google and a number of other tech titans, breaching their servers, stealing their intellectual property, and attempting to spy on their users. The pillaging outraged Google’s top executives—enough so that the company eventually exited China, the world’s biggest market, over the affair.
The event particularly bothered Google co-founder Sergey Brin. Computer-forensics firms and investigators determined that the company had been hacked not through any fault of Google’s own software, but via an unpatched flaw in Microsoft Internet Explorer 6. Why, he wondered, should Google’s security depend on other companies’ products?
In the months that followed, Google began to get more aggressive in demanding that rivals fix flaws in their software’s code. The battles between Google and its peers soon became the stuff of legend. At the center of several of these spats was none other than bug hunter Tavis Ormandy, known for his smashmouth approach to getting flaws fixed. (Ormandy declined to be interviewed for this story.)
For example, not long after Operation Aurora became public, Ormandy disclosed a flaw he found months earlier in Microsoft’s Windows operating system that could allow attackers to commandeer people’s PCs. After waiting seven months for the company to issue a patch, he took matters into his own hands. In January 2010, Ormandy posted details of the flaw on a “full disclosure” mailing list where security researchers notify peers of new vulnerabilities and attack methods. His thinking: If Microsoft wasn’t going to address the problem in a timely manner, people should at least know about the issue so they can develop their own solutions. A few months later, he did the same for a bug affecting Oracle’s Java software as well as for another big Windows flaw, the latter just five days after reporting it to Microsoft.
Critics of the practice censured Ormandy’s behavior, claiming it damaged people’s security. (Apple, Microsoft, and Oracle would not comment for this story.) In a corporate blog post, two Verizon security specialists called researchers who choose the full disclosure route “narcissistic vulnerability pimps.” Ormandy ignored the flak. In 2013 he again chose to make a Windows bug public before Microsoft developed a fix for it. Without the threat of a researcher going public, he reasoned, companies have little pressure to fix a flaw in a timely manner. They can sit on bugs indefinitely, putting everyone at risk.
Google quietly began to formalize what became Project Zero in 2014. (The name alludes to “zero-day” vulnerabilities, the term security pros use to describe previously unknown security holes, ones that companies have had no time, or zero days, to prepare for.) The company established a set of protocols and allowed Chris Evans (no relation to Captain America), former head of Google Chrome security, to take the helm. Evans in turn began recruiting Googlers and others to the team.
Security: A Glossary
Bug: An unexpected error in computer code. The ones with security implications are called “vulnerabilities.”
Zero Day: A vulnerability that people and companies have had no time—“zero days”—to fix.
Exploit: A computer program that a hacker crafts to take advantage of a known vulnerability.
He signed on Ian Beer, a British-born security researcher based in Switzerland, who had demonstrated a penchant for sussing out Apple’s coding errors. He brought on Ormandy, a British bruiser known for his highly publicized skirmishes with Microsoft. Evans enlisted Ben Hawkes, a New Zealander known for stomping out Adobe Flash and Microsoft Office bugs. And he invited George Hotz, a precocious teenager who had earned $150,000 after busting open the Google Chrome browser in a hacking competition earlier that year, to be an intern. (Current members of Project Zero declined multiple requests to be interviewed about their work for this story.)
The first sign that Project Zero had arrived came in April 2014 when Apple credited a Google researcher in a brief note for discovering a flaw that would allow a hacker to take control of software running Apple’s Safari web browser. The note thanked “Ian Beer of Google Project Zero.”
On Twitter, the information-security community openly wondered about the secretive group. “What is Google Project Zero?” asked Dan Guido, cofounder and CEO of the New York–based cybersecurity consultancy Trail of Bits, in a tweet posted April 24, 2014. “Employee of mysterious ‘Google Project Zero’ thanked in Apple security update changelog,” noted Chris Soghoian, then the chief technologist at the American Civil Liberties Union.
More credits soon appeared. In May, Apple credited the discovery of several bugs in its OS X operating system to Beer. A month later, Microsoft patched a bug that made it possible to defeat its malware protection, noting the help of “Tavis Ormandy of Google Project Zero” in an advisory.
By then, the team had generated considerable buzz among those who track security issues. Evans finally made its presence officially known in a blog post on the company’s website. “You should be able to use the web without fear that a criminal or state-sponsored actor is exploiting software bugs to infect your computer, steal secrets or monitor your communications,” he wrote, citing recent examples of spies targeting businesses and human-rights activists as unconscionable abuses. “This needs to stop.”
Evans left the team a year later to join Tesla and now serves as an adviser with the bug bounty startup HackerOne. (Hawkes now leads Project Zero.) Today Evans is more circumspect in describing the group’s origins. “The foundations for Project Zero were laid across years of thoughtful lunchtime conversations and years of observing the evolution of attacks,” he says. “We wanted to create jobs focused exclusively on top-tier offensive research, to attract the best in the world to the public research space.”
It’s a more difficult challenge than it seems. Private money soaks up many of the world’s best hackers, luring them to work behind closed doors, where governments and other entities, through brokers, will pay top dollar for their findings. When that research doesn’t see the light of day, Evans says, people suffer.
In the three years since Google’s Project Zero officially came together, the elite hacker squad has built a reputation for being among the most effective computer bug exterminators on the planet. Although an ordinary consumer is unlikely to recognize any one of their names—James Forshaw, Natalie Silvanovich, Gal Beniamini—the world owes them a debt of gratitude for sealing up the devices and services that run our digital lives. The team is responsible for a litany of improvements in other companies’ products, including finding and helping to patch more than a thousand security holes in operating systems, antivirus software, password managers, open-source code libraries, and other software. Project Zero has published more than 70 blog posts about its work to date, some of the best public security research available on the web today.
The team’s work indirectly benefits Google’s primary business: online advertising. Protecting Internet users from threats means protecting the company’s ability to serve those users ads. Project Zero’s effort to hold vendors’ feet to the fire also forces them to fix bugs that cause Google products to crash.
“This is a dorky name for it, but it’s like a sheepdog,” says Dino Dai Zovi, a cybersecurity entrepreneur, noted Apple hacker, and former head of mobile security at Square. “A sheepdog is not a wolf. It’s kind of benevolent, but it still chases the sheep into line to get them back into the pen.”
In April three members of Project Zero traveled to Miami to attend the Infiltrate security conference, a gathering focused entirely on the offensive side of hacking.
In a city built on suntans and sports cars, the computing cohort look somewhat out of place. Hawkes, Ormandy, and Thomas Dullien, a German security researcher and member of the Project Zero team who is better known by the hacker moniker “Halvar Flake,” gather on the lawn of the swanky Fontainebleau hotel to sip mojitos under the rustling palm trees. Seated at a table with a handful of other conference attendees, the Googlers chat about current affairs, favorite sci-fi tales, and how shameful it is that more is not done to preserve hacker history.
At one point Ormandy swipes a pair of gaudy Versace sunglasses left on a table by Morgan Marquis-Boire, a former Google employee, well-known malware researcher, and current head of security at eBay founder Pierre Omidyar’s media venture First Look Media. The Florida sun has subsided, but Ormandy places the shades over his blue eyes and mugs. He looks ridiculous.
Infiltrate organizer Dave Aitel, an ex-NSA hacker who runs Immunity, an offensive hacking shop, whips out his phone to take a photo. His subject contorts his hands into a heavy metal fan’s “sign of the horns.” Behold Tavis Ormandy: online, a quarrelsome critic who suffers no fools; offline, a genial geek who happily horses around.
“People give you a lot of shit, Tavis,” Aitel says, referring to the frustrating battles Ormandy must endure while prodding vendors to fix their code. “You know, you don’t have to deal with that.” With an impish grin, Aitel proceeds with a facetious attempt to persuade Ormandy to join the “dark side” of hacking—researchers who find bugs and then sell them for a profit rather than report them to the affected companies, rendering the bugs kaput.
Ormandy shrugs off Aitel’s offer, laughs, then sets the glasses back on the table. He may be a troublemaker, but his aims are pure. (Ormandy allowed this reporter to hang around, but later declined to comment.)
Despite its hard-edged reputation, Project Zero has had to become more flexible as its high-minded ideals collide with the complexities of the real world. The team initially kept to a strict 90-day disclosure deadline, or just seven days for “actively exploited” bugs, but several instances of disclosure shortly before companies were scheduled to release updates, such as Microsoft and its recurring “Patch Tuesday,” drew considerable backlash. (It has since added a 14-day extension after the 90 days in the event that a vendor has a patch prepared.)
Project Zero has some of the most explicit disclosure policies in the technology industry, says Katie Moussouris, who helped create the disclosure policy at Microsoft and now runs her own bug-bounty consulting firm called Luta Security. That’s a good thing, she says. Many companies fail to establish guidelines on how to report bugs or lack policies on how or when a researcher should expect a bug to go public. Some organizations give companies even less time to fix their software. CERT/CC, a group run out of Carnegie Mellon University, has a stated 45-day policy, half that of Project Zero, though the group allows for more leeway on a case-by-case basis.
Bug Baroness and Luta Security CEO Katie Moussouris explains the economy of exploits:
There are two markets for bugs: offense and defense. The former is made of nation-states, organized crime groups, and other attackers. The latter consists of bug-bounty programs and companies that sell security products. The offense market pays higher prices and doesn’t have a ceiling. They’re not just buying a vulnerability or an exploit; they’re buying the ability to use it without being detected. They’re buying silence. The defense market can’t pay as much. It’s not like vendors are going to compensate their top developers a million dollars. Even though major companies’ code quality is improving, complexity continues to increase. That means more bugs. What security researchers do with a particular bug may depend on their financial needs, their dispositions about a piece of software or vendor, and their own personal risk. It’s not black-hat sellers vs. white hat.
And Project Zero is as quick to praise a company’s actions to fix a bug as it is to criticize a sluggish response. Earlier this year, Ormandy tweeted that he and colleague Natalie Silvanovich had “discovered the worst windows remote code exec in recent memory,” meaning a way to take over a Windows-based system from afar. “This is crazy bad,” he wrote. The two worked with Microsoft to patch the bug. “Still blown away at how quickly @msftsecurity responded to protect users, can’t give enough kudos. Amazing,” he wrote in a follow-up tweet. Apparently, it’s never too late to improve.
Technology companies may cringe at Project Zero’s audacity, but they should take comfort in the fact that its hackers are willing to resist the urges that drive some researchers to put their findings up for sale. In the years since hacking became professionalized, markets have sprouted for the bugs that Project Zero discloses. Governments, intelligence services, criminals—everyone wants them for themselves and is willing to pay top dollar. The growing adoption of bug bounty programs at software companies is a slight tip of the scale in the other direction, offering compensation to researchers for their time, effort, and expertise. But the payment on the bounty side will never meet the compensation one can get from murkier markets.
“Whatever Google’s bug bounty rewards are, the Chinese government will pay more for it,” says Bruce Schneier, a well-known security guru and executive at IBM.
Back at the Fontainebleau, Dullien tells me he is amazed at how in-demand the skills of hackers have become. What was once a hobby done in dark basements is now a profession at home in the halls of government.
“This was all a ’90s subculture, like hip-hop or break dancing or skateboarding or graffiti,” he says. “It just so happened that the military found it useful.”
According to Matthew Prince, CEO and cofounder of Cloudflare, the leak uncovered by Google’s top bug hunters initially cost his company about a month of growth. (The setback was temporary, he says: Cloudflare’s transparency during the process helped it attract new business.)
If he’s at all sour about the experience, Prince doesn’t let it show. He knows what it’s like to be targeted by truly malicious hackers. A few years ago a hacker group called “UGNazi” broke into Prince’s personal Gmail account, used it to gain control over his corporate email account, then hijacked Cloudflare’s infrastructure. The hooligans could have done significant damage. Instead, they decided to redirect 4chan.org, a common hacker hangout, to their personal Twitter profile for publicity.
Prince still regrets not informing his customers of the full extent of the Cloudbleed issue before Google and Cloudflare published their initial findings. He wishes his company had alerted customers before they read about the leak in news reports. Even so, Prince believes in retrospect that the Project Zero team was right on the timing of when to go live with the disclosure. To his knowledge, no one has uncovered any significant damages related to the leak in the time since. No passwords, credit card numbers, or health records have turned up, despite their initial fears.
Prince says Cloudflare has put new controls in place to prevent such an incident from happening again. The company began a review of all of its code and hired outside testers to do the same. It also instituted a more sophisticated system that identifies common software crashes, which tend to indicate the presence of bugs.
“I have many more gray hairs and will likely live a year less than before as a result of those 14 days,” Prince says about the discovery and the aftermath of the leak. “Thank God it was Tavis and that team who found it and not some crazy hacker.”
Of course, Prince will never be able to rule out the possibility that another person or organization has copies of the leaked data. And that’s just Project Zero’s point. For every one of its team members, there are countless other researchers working in private with less noble goals in mind. It’s the devil you know—or the devil you don’t.
A version of this article appears in the July 1, 2017 issue of Fortune.
Why the U.S. Is Still Richer Than Every Other Large Country
Martin S. Feldstein
APRIL 20, 2017
Each year, the United States produces more per person than most other advanced economies. In 2015 real GDP per capita was $56,000 in the United States. The real GDP per capita in that same year was only $47,000 in Germany, $41,000 in France and the United Kingdom, and just $36,000 in Italy, adjusting for purchasing power.
In short, the U.S. remains richer than its peers. But why?
I can think of 10 features that distinguish America from other industrial economies, which I outline in a recent essay for the National Bureau of Economic Research, from which this article is adapted.
An entrepreneurial culture. Individuals in the U.S. demonstrate a desire to start businesses and grow them, as well as a willingness to take risks. There is less penalty in U.S. culture for failing and starting again. Even students who have gone to college or a business school show this entrepreneurial desire, and it is self-reinforcing: Silicon Valley successes like Facebook inspire further entrepreneurship.
A financial system that supports entrepreneurship. The U.S. has a more developed system of equity finance than the countries of Europe, including angel investors willing to finance startups and a very active venture capital market that helps finance the growth of those firms. We also have a decentralized banking system, including more than 7,000 small banks, that provides loans to entrepreneurs.
World-class research universities. U.S. universities produce much of the basic research that drives high-tech entrepreneurship. Faculty members and doctoral graduates often spend time with nearby startups, and the culture of both the universities and the businesses encourages this overlap. Top research universities attract talented students from around the world, many of whom end up remaining in the United States.
Labor markets that generally link workers and jobs unimpeded by large trade unions, state-owned enterprises, or excessively restrictive labor regulations. Less than 7% of the private sector U.S. labor force is unionized, and there are virtually no state-owned enterprises. While the U.S. does regulate working conditions and hiring, the rules are much less onerous than in Europe. As a result, workers have a better chance of finding the right job, firms find it easier to innovate, and new firms find it easier to get started.
A growing population, including from immigration. America’s growing population means a younger and therefore more flexible and trainable workforce. Although there are restrictions on immigration to the United States, there are also special rules that provide access to the U.S. economy and a path to citizenship (green cards), based on individual talent and industrial sponsorship. A separate “green card lottery” provides a way for eager people to come to the United States. The country’s ability to attract immigrants has been an important reason for its prosperity.
A culture (and a tax system) that encourages hard work and long hours. The average employee in the United States works 1,800 hours per year, substantially more than the 1,500 hours worked in France and the 1,400 hours worked in Germany (though not as much as the 2,200+ in Hong Kong, Singapore, and South Korea). In general, working longer means producing more, which means higher real incomes.
A supply of energy that makes North America energy independent. Natural gas fracking, in particular, has provided U.S. businesses with plentiful and relatively inexpensive energy.
A favorable regulatory environment. Although U.S. regulations are far from perfect, they are less burdensome on businesses than the regulations imposed by European countries and the European Union.
A smaller size of government than in other industrial countries. According to the OECD, outlays of the U.S. government at the federal, state, and local levels totaled 38% of GDP, while the corresponding figure was 44% in Germany, 51% in Italy, and 57% in France. The higher level of government spending in other countries implies not only a higher share of income taken in taxes but also higher transfer payments that reduce incentives to work. It’s no surprise that Americans work a lot; they have extra incentive to do so.
A decentralized political system in which states compete. Competition among states encourages entrepreneurship and work, and states compete for businesses and for individual residents with their legal rules and tax regimes. Some states have no income taxes and have labor laws that limit unionization. States provide high-quality universities with low tuition for in-state students. They compete in their legal liability rules, too. The legal systems attract both new entrepreneurs and large corporations. The United States is perhaps unique among high-income nations in its degree of political decentralization.
Will America maintain these advantages? In his 1942 book, Capitalism, Socialism, and Democracy, Joseph Schumpeter warned that capitalism would decline and fail because the political and intellectual environment needed for capitalism to flourish would be undermined by the success of capitalism and by the critique of intellectuals. He argued that popularly elected social democratic parties would create a welfare state that would restrict entrepreneurship.
Although Schumpeter’s book was published more than 20 years after he had moved from Europe to the United States, his warning seems more appropriate to Europe today than to the United States. The welfare state has grown in the United States, but much less than it has grown in Europe. And the intellectual climate in the United States is much more supportive of capitalism.
If Schumpeter were with us today, he might point to the growth of the social democratic parties in Europe and the resulting expansion of the welfare state as reasons why the industrial countries of Europe have not enjoyed the same robust economic growth that has prevailed in the United States.
Martin S. Feldstein is the George F. Baker Professor of Economics at Harvard University and President Emeritus of the National Bureau of Economic Research.
Thursday, June 29, 2017
Cryptography: True Random Number Generator
Entropy Sources
•A true random number generator (TRNG) uses a non-deterministic source to produce randomness.
•Most operate by measuring unpredictable natural processes, such as pulse detectors of ionizing radiation events, gas discharge tubes, and leaky capacitors.
•Intel has developed a commercially available chip that samples thermal noise by amplifying the voltage measured across undriven resistors.
•LavaRnd is an open source project for creating truly random numbers using inexpensive cameras, open source code, and inexpensive hardware.
•The system uses a saturated CCD in a light-tight can as a chaotic source to produce the seed.
•Software processes the result into truly random numbers in a variety of formats.
RFC 4086 lists the following possible sources of randomness that, with care, can easily be used on a computer to generate truly random sequences.
•Sound/video input: Many computers are built with inputs that digitize some real-world analog source, such as sound from a microphone or video input from a camera.
–The “input” from a sound digitizer with no source plugged in or from a camera with the lens cap on is essentially thermal noise. If the system has enough gain to detect anything, such input can provide reasonably high-quality random bits.
•Disk drives: Disk drives have small random fluctuations in their rotational speed due to chaotic air turbulence. The addition of low-level disk seek time instrumentation produces a series of measurements that contain this randomness.
–Such data is usually highly correlated, so significant processing is needed. Nevertheless, experimentation a decade ago showed that, with such processing, even slow disk drives on the slower computers of that day could easily produce 100 bits a minute or more of excellent random data. (A sketch of this kind of conditioning step follows below.)
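Whatever the source, the raw samples are biased and correlated, so they must be conditioned (whitened) before being treated as uniform random bits. Below is a minimal Python sketch of that idea, assuming a hypothetical read_noise_sample() stand-in for a real physical source; it folds many raw samples into each output block with SHA-256.

```python
import hashlib
import time

def read_noise_sample():
    # Stand-in noise source for demonstration only: low-order bits of a
    # high-resolution timer. A real TRNG would sample a physical source
    # (a microphone with nothing plugged in, a CCD in a light-tight can,
    # disk seek timing jitter, and so on).
    return time.perf_counter_ns() & 0xFF

def conditioned_random_bytes(n_bytes, samples_per_block=1024):
    # Hash many raw samples into each 32-byte output block. The output is
    # only as unpredictable as the entropy actually gathered per block.
    out = bytearray()
    while len(out) < n_bytes:
        h = hashlib.sha256()
        for _ in range(samples_per_block):
            h.update(read_noise_sample().to_bytes(1, "big"))
        out.extend(h.digest())
    return bytes(out[:n_bytes])

print(conditioned_random_bytes(16).hex())
```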
•A typical stream cipher encrypts plaintext one byte at a time, although a stream cipher may be designed to operate on one bit at a time or on units larger than a byte at a time.
•Figure 7.5 is a representative diagram of stream cipher structure.
•In this structure, a key is an input to a pseudorandom bit generator that produces a stream of 8-bit numbers that are apparently random.
•The output of the generator, called a keystream, is combined one byte at a time with the plaintext stream using the bitwise exclusive-OR (XOR) operation (see the sketch after this list).
•The stream cipher is similar to the one-time pad.
•The difference is that a one-time pad uses a genuine random number stream, whereas a stream cipher uses a pseudorandom number stream.
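A minimal Python sketch of this structure, assuming a toy keystream generator (SHA-256 run in counter mode over the key) in place of a vetted pseudorandom bit generator; it illustrates the byte-wise XOR and the fact that encryption and decryption are the same operation:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy pseudorandom byte generator keyed by `key` (illustrative only,
    # not a vetted cipher): SHA-256 over key || counter, one block at a time.
    for counter in count():
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Combine plaintext (or ciphertext) with the keystream one byte at a time.
    return bytes(d ^ k for d, k in zip(data, keystream(key)))

ciphertext = stream_xor(b"example key", b"attack at dawn")
assert stream_xor(b"example key", ciphertext) == b"attack at dawn"
```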
Following are important design considerations for a stream cipher.
1.The encryption sequence should have a large period. A pseudorandom number generator uses a function that produces a deterministic stream of bits that eventually repeats. The longer the period of repetition, the more difficult it will be to perform cryptanalysis. This is essentially the same consideration that was discussed with reference to the Vigenère cipher, namely that the longer the keyword, the more difficult the cryptanalysis.
2.The keystream should approximate the properties of a true random number stream as closely as possible. For example, there should be an approximately equal number of 1s and 0s. If the keystream is treated as a stream of bytes, then all of the 256 possible byte values should appear approximately equally often. The more random-appearing the keystream is, the more randomized the ciphertext is, making cryptanalysis more difficult. (A quick empirical check of these properties is sketched after this list.)
3.The output of the pseudorandom number generator is conditioned on the value of the input key. To guard against brute-force attacks, the key needs to be sufficiently long. The same considerations that apply to block ciphers are valid here. Thus, with current technology, a key length of at least 128 bits is desirable.
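Consideration 2 can be checked empirically on any keystream sample. A short sketch, using os.urandom as a stand-in keystream, that counts 1s versus 0s and measures how far byte frequencies stray from uniform:

```python
import os
from collections import Counter

sample = os.urandom(1_000_000)  # stand-in keystream to analyze

ones = sum(bin(b).count("1") for b in sample)
zeros = 8 * len(sample) - ones
print(f"ratio of 1s to 0s: {ones / zeros:.4f}")  # should be close to 1.0

counts = Counter(sample)
expected = len(sample) / 256
worst = max(abs(c - expected) / expected for c in counts.values())
print(f"largest deviation from uniform byte frequency: {worst:.2%}")
```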
Stream Ciphers - Advantages
•With a properly designed pseudorandom number generator, a stream cipher can be as secure as a block cipher of comparable key length.
•A potential advantage of a stream cipher is that stream ciphers that do not use block ciphers as a building block are typically faster and use far less code than do block ciphers.
•For applications that require encryption/decryption of a stream of data, such as over a data communications channel or a browser/Web link, a stream cipher might be the better alternative.
•For applications that deal with blocks of data, such as file transfer, e-mail, and database, block ciphers may be more appropriate.
•However, either type of cipher can be used in virtually any application.
Monday, June 26, 2017
In Support of the Invasion
‘A landing against organised and highly trained opposition is probably the most difficult undertaking which military forces are called upon to face.’ General of the Army George Marshall
‘Confusion now hath made his masterpiece.’
Shakespeare, Macbeth
Whatever its outcome, the invasion of northern Europe in June 1944 was bound to have decisive importance. Should it fail, Allied losses would almost certainly be so great that Adolf Hitler need not fear for his western territories for a considerable time to come. If it succeeded, the end of the Third Reich would be in sight. Those were the issues, and they were important enough to warrant the preparation of the most elaborate programme of radio counter measures ever devised. If the invasion was to achieve tactical surprise, the first priority was to destroy as many as possible of the radar stations erected along the coasts of France and Belgium as part of the formidable German ‘West Wall’. Along the northern shores of France and Belgium, no fewer than ninety-two radar sites kept watch out to sea. These sites operated the menagerie of German ground radars – the long range Mammut and Wassermann sets, the Giant and the standard Würzburg, the Freya and the naval Seetakt. For the invaders, that multiplicity of ‘radar eyes’ made the jamming problem more difficult than anything previously attempted. Yet the problem of deception promised to be even more formidable.
Instruments of Darkness
The History of Electronic Warfare, 1939–1945
A Greenhill Book
First published in 1967 by William Kimber & Co., London
Expanded edition published in 1977 by
Macdonald and Jane’s Publishers, London
Corporate Development and Strategy Mergers and Acquisitions, New Ventures, and Brand Extensions
Mergers and acquisitions, new ventures, and brand extensions—all aspects of corporate development—are unquestionably strategic business functions. By all the traditional criteria for distinguishing between strategic and tactical decisions, corporate development issues qualify as strategic.
The commitments in question are large, they involve the overall direction of the enterprise, and they have long-term consequences. The most common method for evaluating alternative courses of action
in these areas is a business case analysis consisting of detailed projections of future distributable cash flows discounted back to the present. But discounted cash flow, as we have argued in chapter 16, is by itself a critically flawed tool for making decisions of this sort. The values calculated to justify initiatives depend on projections, into the distant future, of growth rates, profit margins, costs of capital, and other crucial yet highly uncertain variables. Also, a typical discounted cash flow analysis rests on a number of critical assumptions about the nature and intensity of future competition that are
rarely explicit and generally untested. The strategic framework we have developed in this book, especially the view that the most important determinant of strategy is whether an incumbent firm benefits from competitive advantages, applies directly to issues of corporate development. In fact, the utility of this approach in clarifying decision making in this area is an important test of its worth. At a minimum, clarifying the competitive environment in which new initiatives will succeed or fail should provide an essential check on whether the conclusions of a discounted cash flow–based business case are reasonable.
COMPETITION DEMYSTIFIED
A Radically Simplified Approach to Business Strategy
BRUCE GREENWALD AND JUDD KAHN
What Can E-waste Cost?
Electronics may contain hazardous materials that are harmful if they end up in landfills. It is our responsibility as an electronics retailer to make sure the e-waste we collect from our customers, or use in our day-to-day operations, goes to a recycler and not to a landfill.
Our program promotes responsible environmental stewardship by requiring all recyclers we retain to comply with standards regarding the reuse, refurbishment, or recycling of products collected through our programs and the disposal of waste generated from the recycling process.
While it is important to do the right thing for the environment, recycling e-waste is also the law. There are environmental laws regulating the handling and disposal of electronics.
What's the risk?
The risk to the company for not following these regulations is enormous. Other major retailers have incurred major financial penalties for improperly disposing of material, such as e-waste.
Target - $22M
Walmart - $25M
Comcast - $25M
AT&T - $28M
Lowe's - $18M
CVS - $13M
Along with the financial risk, there is a reputational risk. Would you shop at an electronics retailer if you knew they were throwing their own electronics into the landfill? We need to be the trusted expert on electronics from selling to end-of-life.
Saturday, June 24, 2017
FTP bounce
Which of the following is indicative of an FTP bounce?
A. Arbitrary IP address
B. Reverse DNS lookups
C. Same Port Number
D. File Transfer Success
The answer is A: an arbitrary IP address appearing in the PORT command is the telltale sign. In a bounce attack, the attacker first uploads a file to an FTP server and then sends the server an FTP PORT command containing the IP address and port number of the machine (and service) being attacked. The uploaded file contains commands relevant to the service being attacked (SMTP, NNTP, and so on), instructing a third party to connect to the service. Because the attacker never connects directly to the victim machine, tracking down the perpetrator is difficult, and the technique can circumvent network address-based access restrictions. As an example, suppose that a client uploads a file containing SMTP commands to an FTP server. Then, using an appropriate PORT command, the client instructs the server to open a connection to a third machine's SMTP port. Finally, the client instructs the server to transfer the uploaded file containing SMTP commands to the third machine. This may allow the client to forge mail on the third machine without making a direct connection.
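The PORT argument itself is what gives the attack away: it is six comma-separated numbers, four for the IP address and two for the port (port = p1 * 256 + p2), and nothing in the protocol forces the address to be the client's own. A small Python sketch of how such a command is built (the addresses are documentation/example values, not real hosts):

```python
def port_command(ip: str, port: int) -> str:
    # FTP PORT argument format: h1,h2,h3,h4,p1,p2 where port = p1*256 + p2
    octets = ip.split(".")
    return "PORT " + ",".join(octets + [str(port // 256), str(port % 256)])

# A well-behaved client advertises its own address for the data connection;
# a bounce attack instead names a third-party victim.
print(port_command("192.0.2.10", 50000))  # e.g. the client itself
print(port_command("203.0.113.5", 25))    # e.g. a victim's SMTP port
```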
Another aspect of FTP that opens the system up to security problems is the third-party mechanism included in the FTP specification known as proxy FTP. It is used to allow an FTP client to have the server transfer the files to a third computer, which can expedite file transfers over slow connections. However, it also makes the system vulnerable to something called a "bounce attack." Bounce attacks are outlined in RFC 2577, and involve attackers scanning other computers through an FTP server. Because the scan is run against other computers through the FTP server, it appears at face value that the FTP server is actually running the scans. This attack is initiated by a hacker who first uploads files to the FTP server. Then they send an FTP "PORT" command to the FTP server, using the IP address and port number of the victim machine, and instruct the server to send the files to the victim machine. This can be used, for example, to transfer an uploaded file containing SMTP commands so as to forge mail on the third-party machine without making a direct connection. It will be hard to track down the perpetrator because the file was transferred through an intermediary (the FTP server).

Packet Sniffing FTP Transmissions

As mentioned earlier in this section, FTP traffic is sent in cleartext so that credentials, when used for an FTP connection, can easily be captured via MITM attacks, eavesdropping, or sniffing. Exercise 5.03 looks at how easy it is to crack FTP with a sniffer. Sniffing (covered in Chapter 2) is a type of passive attack that allows hackers to eavesdrop on the network, capture passwords, and use them for a possible password cracking attack.
PORT commands can also be used in FTP Bounce attacks, in which an attacking FTP client sends a PORT command requesting that the server open a data port to a different host than that from which the command originated. FTP Bounce attacks are used to scan networks for active hosts, to subvert firewalls, and to mask the true origin of FTP client requests (e.g., to skirt export restrictions). The only widely supported (RFC-compliant) alternative to active mode FTP is passive mode FTP, in which the client rather than the server opens data connections. That mitigates the "new inbound connection" problem, but passive FTP still uses a separate connection to a random high port, making passive FTP only slightly easier to deal with from a firewall-engineering perspective. (Many firewalls, including Linux iptables, now support FTP connection tracking of passive mode FTP; a few can track active mode as well.) There are two main lessons to take from this discussion of active versus passive FTP. First, of the two, passive is preferable since all connections are initiated by the client, making it somewhat easier to regulate and harder to subvert than active mode FTP. Second, FTP is an excellent candidate for proxying at the firewall, even if your firewall is otherwise set up as a packet filter.
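Most modern clients default to passive mode for exactly these reasons. A brief sketch using Python's standard-library ftplib (the host, credentials, and file name below are placeholders, not a real server):

```python
from ftplib import FTP

# Placeholder host, credentials, and file name for illustration only.
with FTP("ftp.example.com") as ftp:
    ftp.login("anonymous", "guest@example.com")
    ftp.set_pasv(True)  # passive mode: the client opens the data connection
    with open("README.txt", "wb") as local_file:
        ftp.retrbinary("RETR README.txt", local_file.write)
```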
Layer 4
The protocols that use a three-way handshake to transfer information can be found within which layer of the OSI model?
A. Layer 2
B. Layer 3
C. Layer 4
D. Layer 5
Layer 4: Transport Layer
Just above the Network Layer is the Transport Layer (Layer 4). The Transport Layer provides a valuable service in network communication: the ability to ensure that data is sent completely and correctly through the use of error recovery and flow control techniques. On the surface, the Transport Layer and its function might seem similar to the Data Link Layer because it also ensures the reliability of communication. However, the Transport Layer not only guarantees the link between stations; it also guarantees the actual delivery of data.
Connection Versus Connectionless
At the Transport Layer are the two protocols known as TCP and UDP; these protocols are connection-oriented and connectionless, respectively. Connection-oriented protocols operate by acknowledging or confirming every connection request or transmission, much like getting a return receipt for a letter. Connectionless protocols are those that do not require an acknowledgment and, in fact, do not ask for one. The difference between the two is overhead: because connection-oriented protocols need acknowledgments, they carry more overhead and deliver lower performance, while connectionless protocols are faster because they lack this requirement.
From a high-level perspective, the Transport Layer is responsible for communication between host computers and verifying that both the sender and receiver are ready to initiate the data transfer. The two most widely known protocols found at the Transport Layer are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP is connection-oriented, whereas UDP is connectionless. TCP provides reliable communication through the use of handshaking, acknowledgments, error detection, and session teardown. UDP is a connectionless protocol that offers speed and low overhead as its primary advantage.
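The difference is visible directly in the sockets API: a TCP client must complete the three-way handshake via connect() before any data can move, while a UDP client simply hands a datagram to the network with sendto(). A minimal Python sketch (the address and port are placeholder values):

```python
import socket

# TCP (connection-oriented): connect() performs the SYN / SYN-ACK / ACK
# three-way handshake before any application data is sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.50", 7))   # placeholder echo-service address
tcp.sendall(b"hello over TCP")
tcp.close()

# UDP (connectionless): no handshake and no delivery guarantee; the
# datagram is simply handed off to the network.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("192.0.2.50", 7))
udp.close()
```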
OSI layers and common protocols.

OSI REFERENCE MODEL LAYER | COMMON PROTOCOLS AND APPLICATIONS
Application | FTP, TFTP, SNMP, Telnet, HTTP, DNS, and POP3
Presentation | ASCII, EBCDIC, TIFF, JPEG, MPEG, and MIDI
Session | NetBIOS, SQL, RPC, and NFS
Transport | TCP, UDP, SSL, and SPX
Network | IP, ICMP, IGMP, BGP, OSPF, and IPX
Data Link | ARP, RARP, PPP, SLIP, TLS, L2TP, and LTTP
Physical | HSSI, X.21, and EIA/TIA-232
Cable distance and speed limitations
CABLE TYPE | CABLE NAME | ETHERNET DESIGNATION | TRANSMISSION SPEED | MAXIMUM SEGMENT LENGTH
Coaxial | RG-8/U | 10Base5 (Thick Ethernet) | 10 Mbps | 500 meters
Coaxial | RG-58A/U | 10Base2 (Thin Ethernet) | 10 Mbps | 185 meters
UTP | CAT3 | 10Base-T | 10 Mbps | 100 meters
UTP | CAT5 | 100Base-TX | 100 Mbps | 100 meters
UTP | CAT5e | 1000Base-T | 1,000 Mbps | 100 meters
UTP | CAT6 | 1000Base-T | 1,000 Mbps | 100 meters
UTP | CAT6 | 10GBase-T | 10 Gbps | (not listed)
UTP | CAT6a | 10GBase-T | 10 Gbps | 100 meters
Fiber optic | Multimode | 100Base-FX | 100 Mbps | 2 kilometers
Fiber optic | Multimode | 1000Base-SX | 1,000 Mbps | 220–500 meters
Fiber optic | Multimode | 1000Base-LX | 1,000 Mbps | 550 meters
Fiber optic | Multimode | 10GBase-SR | 10 Gbps | 300 meters
Fiber optic | Singlemode | 1000Base-LX | 1,000 Mbps | 2 kilometers
Fiber optic | Singlemode | 10GBase-LR | 10 Gbps | 10 kilometers
Fiber optic | Singlemode | 10GBase-ER | 10 Gbps | 40 kilometers