There are two broad categories of risk with Web 2.0: social engineering and flaws in developers' code. For people working with Web 2.0, having a risk table and mitigation standards for these two categories will help define policy and provide guidance when something bad happens.
Social engineering, whether pretexting, phishing, or any other means of getting people to give up information, is not new. But social networks make it far easier to influence people over a period of time into revealing useful information about their accounts, what they do, where they live, when they will be home, what they buy, how lonely they are, and any other human condition that an attacker can get a hook into. This is not a Web 2.0-only issue; it has been with us for a very long time, and Web 2.0 amplifies it because we interlink ourselves with each other. If I get a good deal, I want to share it with my “friends”.
Web sites like eBay have been dealing with various levels of fraud since almost day one. eBay has built a number of technological controls and a level of human oversight into the process to develop a community of buyers and sellers with a high level of trust. However, there will always be people who try to abuse or otherwise misuse that trust environment to steal from others.
MySpace has also been dealing with the growing pains of hosting a highly interconnected group of people who often interact without any form of oversight. Since anyone can do (relatively) anything on MySpace, the company has had to put technology and people in place to solve the problems that come with a highly connected population who might not all like each other, or who will use the social space to lure or attract victims.
Xbox, HP, Verizon, and a host of other companies have also learned that the person on the end of the phone, even one who has all the right answers to the secret questions, the home address, and the last four digits of the Social Security number, may not actually be the customer requesting information or changing service and billing.
Important intellectual property or private corporate data accidentally or intentionally released on the network via blogs or other vehicles is something many companies, from Google to HP to Dell, have experienced. These releases of internal information also have a major impact on a business. People are always interested in what a big company is doing, and any leverage helps an investor or rival learn about the company, its products, or whether it faces a resource or other constraint.
Building a trust ecology, or trust system, is difficult at best when people will knowingly try to take advantage of the system or inadvertently release critical information. In a social framework of user-generated content that is often unregulated, abusing the system becomes much easier: attackers can create fake profiles or otherwise turn the trust model against itself. The very real possibility that anyone attached to a Web 2.0 system, in a social or personal context, might not be well intentioned is a Public Relations and Human Resources issue. Technology might solve some issues, but when it comes to social engineering, the people involved in the process are the first and last line of defense.
Programmatic errors, that is, errors in source code, APIs, frameworks, JavaScript, or other code that provides the functional interface between the user and the backend systems, are also a point of entry for an attacker. Hacking your own web sites before release, monitoring what activity is normal for a site, and securing backend and other intermediary systems is classic information security, the kind of thing we should all be doing on a routine basis. Even so, errors will creep into the system, and even seemingly flawless code can contain a hook or function that lets a hacker get into the backend.
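To make that concrete, here is a minimal sketch of the kind of coding error that becomes a point of entry, using a hypothetical profile lookup against a SQLite table; the table and function names are illustrative, not from any particular site:

```python
import sqlite3

def get_profile_unsafe(conn, username):
    # Flawed: user input is concatenated straight into the query, so input
    # such as "x' OR '1'='1" rewrites the SQL and exposes the backend.
    cursor = conn.execute(
        "SELECT * FROM profiles WHERE username = '" + username + "'")
    return cursor.fetchall()

def get_profile_safe(conn, username):
    # Fixed: a parameterized query keeps user input as data, not as SQL.
    cursor = conn.execute(
        "SELECT * FROM profiles WHERE username = ?", (username,))
    return cursor.fetchall()
```

The two functions are behaviorally identical for honest input; only the second stays identical for hostile input, which is exactly the distinction that pre-release hacking of your own site is meant to surface.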
This being classic infosec, we know we should be auditing everything at least every 90 days, so that undocumented changes or unannounced updates to web sites are at least caught even when the audit or security department had no idea anyone was releasing an update. A better alternative is to have a representative from the security department working with the business unit as it builds or buys its technology: help the business unit do its work and make money, but in a way that the risks are known and the technology is evaluated for both functionality and security at the same time.
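As one illustration of what such a routine audit can automate, the sketch below hashes everything under a web root and compares it against a saved baseline, so an unannounced update shows up as an added, removed, or changed file. The directory layout and baseline filename are assumptions for the example:

```python
import hashlib
import json
import pathlib

def snapshot(web_root):
    # Hash every file under the web root so the audit has a baseline.
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(web_root).rglob("*") if p.is_file()}

def audit(web_root, baseline_path="baseline.json"):
    # Compare the current state against the recorded baseline and flag
    # anything added, removed, or altered since the last audit.
    current = snapshot(web_root)
    baseline = json.loads(pathlib.Path(baseline_path).read_text())
    added = sorted(current.keys() - baseline.keys())
    removed = sorted(baseline.keys() - current.keys())
    changed = sorted(f for f in current.keys() & baseline.keys()
                     if current[f] != baseline[f])
    return added, removed, changed
```

A report of the three lists, reviewed every cycle, is enough to catch the "nobody told security" update even when change management failed.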
Developing a risk table around these two major categories of security issues can go a long way toward helping a company address problems before they become incidents. Having such a table, or documentation along these lines, and actually implementing it can also demonstrate due diligence and due care later if they are ever needed. Working through business requirements with the unit acquiring the technology is also a good way to build relationships among infosec, the business, and developers. Having an audit plan, and executing on it, likewise shows due care and due diligence over the long run. You would much rather find mistakes yourself than have someone outside the organization find them.
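What such a risk table looks like is up to each organization; the sketch below is one illustrative way to capture the two broad categories from this section, with made-up entries and a simple likelihood-times-impact ranking:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    category: str      # "social engineering" or "programmatic error"
    description: str
    likelihood: int    # 1 (rare) to 5 (frequent)
    impact: int        # 1 (minor) to 5 (severe)
    mitigation: str
    owner: str         # who acts when something bad happens

    def score(self):
        # Likelihood times impact gives a rough order for mitigation work.
        return self.likelihood * self.impact

risk_table = [
    Risk("social engineering", "Fake profile used to phish employees",
         4, 3, "User education; verified-contact policy", "HR / PR"),
    Risk("programmatic error", "Injection flaw in a public API",
         3, 5, "Code review; pre-release penetration test", "Infosec"),
]
risk_table.sort(key=Risk.score, reverse=True)
```

Even a table this simple, kept current and tied to the audit plan, is the kind of artifact that documents due care long after the people who wrote it have moved on.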