I was at Microsoft’s Redmond offices in late May, getting a crash course in security, malware and terms like 'fuzz testing' and 'threat modelling'. Between the charts and slides, I managed to wrap my head around an over-arching picture of what it is exactly that the Microsoft Security team has been doing ever since Gates and Mundie forged ahead with the Trustworthy Computing initiative in 2001.
There were some 20 journalists from as far afield as Australia, Germany, France, Malaysia and Singapore over three days of Seattle-ish drizzle and coffee, but we weren't there for the desserts. What we got was a series of behind-the-scenes tell-all sessions on what really happens (or has happened) in the world of software security, at least for products like Windows, Internet Explorer, Hotmail, Azure (Microsoft’s cloud computing initiative) and more. I'm not going to bore you with the technical stuff, so here's the general theme of what went on.
Microsoft’s Trustworthy Computing initiative came about after a succession of vulnerabilities was uncovered in 2001. According to Microsoft’s Steve Lipner, Senior Director, Security Engineering Strategy, Trustworthy Computing Security, 2001 saw the appearance of the Code Red and Nimda Internet worms. One after another, they hit networks in quick succession, but they were just the tip of the iceberg. When a vulnerability was discovered in the Universal Plug-and-Play (UPnP) code in Windows XP, it became clear that 2001 wasn't a great year for Microsoft in the security department (do note that it was also the year the Twin Towers collapsed, so it was a pretty nerve-wracking time all around).
“We thought we did a fairly decent job with Windows XP, but the UPnP vulnerability surfaced 3 to 4 months after XP was shipped. By the end of 2001, it was pretty evident that we couldn’t continue as we were going, and that led to Craig Mundie and Bill Gates launching the Trustworthy Computing initiative,” Lipner said. By February 2002, the entire Windows division had to be shut down and everybody (some 85,000 people) was sent to classes for five days, learning things like threat modelling, code reviews, tools usage, penetration testing and so on.
“By the summer of 2002, we were done with the security push exercises. We then built a team that was capable of finding new vulnerabilities, and that birthed the security science we have today.” The chart below was taken from one of Lipner's slides showing the total number of vulnerabilities in different OSes (RHEL4 is Red Hat Enterprise Linux 4) within the first year of their respective releases. Notice that Windows XP had close to 125 vulnerabilities within its first year of launch (with about 60-70 of them fixed), while Vista, launched in 2007, had fewer vulnerabilities (about 60-70), with 40-45 of them fixed.
Part of the Trustworthy Computing initiative came to fruition in July 2004. Called the Security Development Lifecycle, or SDL, it is a process (some call it a culture) that runs alongside every single software product or solution lifecycle, be it Windows, Office, Hotmail, Exchange, SharePoint or Azure, from pre- to post-release.
Lipner believes that SDL runs across three levels: Secure by Design (the architecture), Secure by Default (deployment and defence) and Continuous Improvement (never rest on your laurels). In a nutshell, SDL can be summarized by the flow chart below, where the green bars reflect the core thrust of the lifecycle, backed by education and accountability.
While there are only about 65-70 people on the SDL team, the process has to be carried across to tens of thousands of developers and managers, across continents, around the clock. So SDL encompasses online training courses, automated tools and frameworks, and, above all, ease of use. For a division that had only three security engineers in 2001, getting to where they are today is no small endeavour.
If you're wondering about Windows 7, the chart below demonstrates the effectiveness of SDL on the last two major operating systems from Microsoft.
When it comes to malware, there are two broad types: drive-by malware and socially engineered malware. According to John Scarrow, General Manager for Safety Services, drive-by malware is malicious code that exploits a vulnerability in software without the user even knowing it (it typically strikes when the user visits an infected site), while socially engineered malware infects by exploiting the user's trust, through luring tactics like friendly-looking email attachments. Of the two, socially engineered malware has become far more prevalent than drive-by.
“Conficker (a drive-by Internet worm which surfaced in 2008) was one of the most publicized vulnerabilities ever known during its entire lifespan," Scarrow said. "Most analysts were looking at about 4.5 million infected systems from that worm. While that’s a big scary number, Internet Explorer makes over 3 million blocks every day on socially engineered malware. So versus Conficker, socially engineered malware is clearly the bigger attack surface than drive-by.”
Several other conversations we had with Microsoft’s Security team revealed how software updates (Patch Tuesdays) are often a cat-and-mouse game with malicious coders who release exploits (often on Wednesdays). Brad Albrecht, Senior Security Program Manager for Microsoft Office, talked about keeping Office documents operating within a sandbox: should a user introduce an external item (e.g. a file attachment or data from an external thumbdrive), the program prompts the user to make specific trust decisions, which the software remembers.
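To get a feel for the "remembered trust decision" idea Albrecht described, here is a minimal sketch in Python. This is purely illustrative and not how Office actually implements it: the `TrustDecisionCache` class, the content-hash keying and the `ask_user` callback are all my own assumptions, standing in for whatever mechanism the real product uses.

```python
import hashlib


class TrustDecisionCache:
    """Hypothetical sketch: remember each per-file trust decision,
    keyed by content hash, so the user is only prompted once."""

    def __init__(self):
        # Maps SHA-256 hex digest of the content -> bool (trusted?)
        self._decisions = {}

    def is_trusted(self, content: bytes, ask_user) -> bool:
        key = hashlib.sha256(content).hexdigest()
        if key not in self._decisions:
            # First encounter: escalate to the user and cache the answer.
            self._decisions[key] = bool(ask_user())
        return self._decisions[key]


# Simulate opening the same external attachment twice.
prompts = []


def ask_user():
    prompts.append(1)  # record that the user was actually prompted
    return True        # simulate the user clicking "trust"


cache = TrustDecisionCache()
doc = b"external attachment bytes"

print(cache.is_trusted(doc, ask_user))  # True (user prompted)
print(cache.is_trusted(doc, ask_user))  # True (cached, no new prompt)
print(len(prompts))                     # 1
```

The point of hashing the content rather than, say, the filename is that a renamed copy of the same file keeps its decision, while a modified file triggers a fresh prompt.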
Ultimately, the security issues dogging companies like Microsoft today are far more complex than automated workflows, sandboxed documents and fixed/unfixed vulnerability ratings. As attacks and exploits originate from different countries, questions of legality have to be resolved: where geographically the secure data is hosted, what happens if data hosted in the cloud is compromised in one country while the data center sits in another, and so on. In summary, Microsoft's Lipner shared this thought on Day 3 of our session.
“I think if you’re doing data on the Internet, there’s a set of risks you’re exposed to and you have to manage those risks, whether you’re operating your own server in a data center in your building or in the cloud. It’s important that you think about the risks and compliance requirements, the law and the sensitivities of your data, and get the assurances from your provider. The picture is not all one-sided.”
Well-said, and well-covered.
Terence Ang used to be the Supervising Editor for the New Media division in Singapore, where he worked with the editorial teams behind HardwareZone.com and HWM the magazine. In that role, he looked at ways the teams in Singapore could collaborate with the editors in Malaysia, Philippines, Indonesia and Thailand. Terence is currently the Product Manager, but contributes to the blog section whenever he can (or finds something interesting to talk about).