Cloud computing is one of the hottest topics in IS today. Vendors are scrambling to gain market share across all delivery areas, and SaaS is the most widely recognized of the delivery models, ahead of IaaS and PaaS. My research is in the area of trust development in SaaS and how the basic principles of security aid in trust formation. One of the more fascinating aspects of trust in cloud computing, or any technology for that matter, is the difference between real and perceived risk (though it falls outside the scope of my study). There is an epistemological dichotomy between subjective and objective risk assessments. In a controversial paper on societal risk, Starr (1980) states that risk assessments by the public are often uninformed, provide no measurement of actual risk, and are mere perceptions of risk that may lead to irrational behavior. No one would deny that there is risk in cloud computing, but how does that risk differ from the risk of today's architectures? In the case of cloud-based services, the consumer has a very abstract view of the underlying security infrastructure, so any risk assessment they perform could be considered uninformed in the strictest sense Starr implies.
However, in another seminal work, Slovic (1999) states that gender, race, political worldview, affiliation, emotional affect, and trust are factors that correlate strongly with risk judgments. This clearly refutes Starr's claims and introduces several perspectives that exert subjective influence on risk assessments. Each variable Slovic introduces may modify trust formation in cloud computing. Thompson (1999) also rejects Starr's view of risk, noting that it stands in stark contrast to the traditional conceptions of risk that shape law and public policy, which rest on the subjective judgments of the archetypal "reasonable person." That is to say, if a reasonable person would consider an act risky, then it is a risk to them. This philosophical viewpoint reflects how an individual assesses the risk of a technology, at least to the extent that the assessment influences their decision to use it. Thompson stresses that Starr's objective measure of risk is an assessment of the probability that an event may or may not occur and does not account for the perceptions of laypersons. This is a crucial point for SaaS, as most users come from the general population and are laypersons with respect to computer science and security.
A grim reality is that most actual dangers, or "real risks," are not realized until after decisions are made. Starr and Whipple (1984) state that the benefits of a technology precede the risks of its use. Railroads, automobiles, and airplanes have greatly altered our social structure in positive and negative ways that were not predicted when they were introduced. The obvious benefits of reduced travel time and shrinking geographic limitations were realized immediately, as were the profound positive effects on the economy and social culture. Risks such as pollution, traffic, drunk driving, and other user and non-user harms have been spread over space and time in a non-uniform distribution (Starr & Whipple, 1984). Similarly, the use of cloud-based services is expected to offer enterprises long-term IS savings, including reduced IT infrastructure costs and pay-for-service models that avoid waste (ISACA, 2007). These savings will precede any unforeseen losses of proprietary information, litigation over service agreement breaches or lost client data, or other dangers that may lurk in the uncertain realm of the cloud model. This would seem to suggest that the real risks associated with cloud application use may have no influence on trust formation, since the perceived benefit of cloud use precedes any known risks. The hypothesis is that some initial trusting belief exists based on perceived risk.
The parlance on "real risk" versus "perceived risk" elucidates the fact that a person’s perception of risk in using cloud-based services may not be consistent with the actual risks that exist or the damages they may incur should they choose to use the service. Both sides of the argument also seem to agree that actual risks do not predict human behavior when making risk based decisions. This is not to say that the individual’s beliefs and perceptions of the security afforded by the vendor are not sufficient for making a subjective assessment to accept or reject the risk of SaaS cloud use. Rather, for personal risk assessments of technology for the common SaaS user, the objective assessment approach Starr suggests does not seem practical in cases of new technology such as cloud computing where there is a great deal of uncertainty. Starr assumes access to empirical information, which is not available to consumers who simply perceive risk through observation, cognition, and emotion. Thompson argues that the subjective approach is better suited than attempting to establish relative frequencies in nature and calculate probabilistic measurements of risk. This illustrates an important context differential between a formal risk assessment conducted by an organization and a personal risk assessment made by an individual. Even in the case of organizational risk assessments, the individuals involved will undoubtedly have some degree of personal perspectives, assumptions, political agendas, and other subjective influences on the results.
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis, 19(4), 689-701.
Starr, C. (1980). Introductory Remarks. In R. C. Schwing & W. A. Albers (Eds.), Societal risk assessment: How safe is safe enough? New York, NY: Plenum.
Starr, C., & Whipple, C. (1984). A perspective on health and safety risk analysis. Management Science, 30(4), 452-463.
Thompson, P. (1999). Risk objectivism and risk subjectivism: When are risks real? Risk: Issues in Health & Safety, 1(Winter), 3-22.