[Ref] “The Butterflies of Zagorsk”: an analysis of characteristic principles of Vygotskyan defectology

ABSTRACT: This article reflects on the documentary “The Butterflies of Zagorsk”, produced by the BBC in 1992. The subjects of the documentary are disabled children, more precisely deafblind children. In Vygotsky’s theory, such deficiencies are characterized as belonging to the field of disabilities. Among the subjects we highlight the protagonist, Natasha, and analyze her lines from the perspective of the postulates of Historical–Cultural Psychology. We have chosen to guide this reflection by the educational principles employed by members of the Defectology Institute of Moscow to promote children’s learning, and by the transcription of some of Natasha’s lines, with a view to analyzing some psychological concepts of disability and its overcoming. The analysis is grounded in the teaching methods developed by Vygotsky, focusing on the Zone of Proximal Development (ZPD) and the historical and social construction of knowledge.

KEYWORDS: Vygotsky. Defectology. Educational principles. Language. Teaching and Learning.

Article: http://www.uel.br/revistas/uel/index.php/histensino/article/viewFile/17933/15995

[Ref] How to Build a Collaborative Office Space Like Pixar and Google

When the Second World War ended, universities struggled to cope with record enrollments. Like many universities, the Massachusetts Institute of Technology built a series of new housing developments for returning servicemen and their young families. One of those developments was named Westgate West. The buildings doubled as the research lab for three of the greatest social scientists of the 20th century and would come to reframe the way we think about office spaces.

In the late 1940s, psychologists Leon Festinger, Stanley Schachter, and sociologist Kurt Back began to wonder how friendships form. Why do some strangers build lasting friendships, while others struggle to get past basic platitudes? Some experts, including Sigmund Freud, explained that friendship formation could be traced to infancy, where children acquired the values, beliefs, and attitudes that would bind or separate them later in life. But Festinger, Schachter, and Back pursued a different theory that would go on to shape the thinking of contemporary prophets from Steve Jobs to Google’s Sergey Brin and Larry Page.

The researchers believed that physical space was the key to friendship formation; that “friendships are likely to develop on the basis of brief and passive contacts made going to and from home or walking about the neighborhood.”¹ In their view, it wasn’t so much that people with similar attitudes became friends, but rather that people who passed each other during the day tended to become friends and later adopted similar attitudes.


Festinger and his colleagues approached the students some months after they had moved into Westgate West, and asked them to list their three closest friends. The results were fascinating—and they had very little to do with values, beliefs, and attitudes. Forty-two percent of the responses were direct neighbors, so the resident of apartment 7 was quite likely to list the residents of apartments 6 and 8 as friends—and less likely to list the residents of apartments 9 and 10. Even more striking, the lucky residents of apartments 1 and 5 turned out to be the most popular, not because they happened to be kinder or more interesting, but because they happened to live at the bottom of the staircase that their upstairs neighbors were forced to use to reach the building’s second floor. Some of these accidental interactions fizzled, of course, but in contrast to the isolated residents of apartments 2 and 4, those in apartments 1 and 5 had a better chance of meeting one or two kindred spirits.

Westgate West as Inspiration for Pixar

Half a century passed, and the Westgate West message began to infiltrate office culture. Steve Jobs famously redesigned the offices at Pixar, which originally housed computer scientists in one building, animators in a second building, and executives and editors in a third. Jobs recognized that separating these groups, each with its own culture and approach to problem-solving, discouraged them from sharing ideas and solutions.


Pixar’s office, designed to encourage collaboration – via Fubiz (http://www.fubiz.net/2010/05/17/pixar-office/)

Perhaps the animators could introduce a fresh perspective when the computer scientists became stuck; and maybe the executives would learn more about the nuts and bolts of the business if they occasionally met an animator in the office kitchen, or a computer scientist at the water cooler. Jobs ultimately succeeded in creating a single cavernous office that housed the entire Pixar team, and John Lasseter, Pixar’s chief creative officer, declared that he’d “never seen a building that promoted collaboration and creativity as well as this one.”

Google’s “150-Feet From Food” Rule

Google’s New York City campus capitalizes on many of the same ideas. The growing campus already has a massive footprint, occupying an entire floor (and part of some other floors) in a building that covers a city block in Manhattan’s Chelsea neighborhood. The elevators that link these floors are notoriously slow, so instead of forcing workers to wait, the architects built vertical ladder chutes between adjacent floors. Workers are encouraged to “casually collide,” an aim that echoes Jobs’ encouragement of “unplanned collaborations.”

When I visited the campus in March, my guide explained that no part of the office was more than 150 feet from food—either a restaurant, a large cafeteria, or a micro-kitchen—which encourages employees to snack constantly as they bump into coworkers from different teams within the company. Even if Google workers aren’t constantly generating new ideas, plenty of evidence suggests that they enjoy their work, and that this enjoyment feeds into motivation and eventually greater productivity.

Festinger and his colleagues were right to focus on physical space when they explored how friendships form—but what made their investigation doubly impressive was how deeply their insights influenced the corporate world’s smartest thinkers fifty years in the future. People with similar attitudes are more likely to get along, and those with diverse backgrounds are more likely to generate novel ideas, but neither kind of interaction happens without the primary ingredient: casual encounters and unexpected conversations. The key features that make for a collaborative office space:

  • An open plan and other design features (e.g., high-traffic staircases) that encourage accidental interactions.
  • More common areas than are strictly necessary—multiple cafeterias, other places to read and work that encourage workers to leave confined offices.
  • Emphasis on areas that hold two or more people, rather than single-occupancy offices.
  • Purpose-free generic “thinking” areas in open-plan spaces, which encourage workers to do their thinking in the presence of other people, rather than alone.

What About Your Workspace?

What office features do you think make for a more collaborative workspace?


[Ref] Knowledge Transfer: You Can’t Learn Surgery By Watching

08 SEP 2015

Learning to perform a job by watching others and copying their actions is not a great technique for corporate knowledge transfer. Christopher G. Myers suggests a better approach: Coactive vicarious learning.

by Michael Blanding

While some lessons can be learned by watching—a parent’s reaction after touching a hot stove can be a good lesson for a youngster on dangers in the kitchen—other lessons are harder to learn through observation alone. No matter how many times you watch a surgeon perform open-heart surgery, chances are you won’t ever learn how to pull off a triple bypass.

And yet, in business, companies routinely expect employees to pick up new job knowledge through vicarious learning—through reading descriptions of tasks in knowledge-management databases or by observing colleagues from afar. “The predominant analogy for vicarious learning is the photocopier,” says Christopher G. Myers, assistant professor of Organizational Behavior at Harvard Business School. The idea: Watch what other people do, make copies of the good things and dispose of the bad things, and we are good to go.

But good knowledge transfer doesn’t quite happen that way, and organizations that practice watch-and-learn vicarious learning run the risk of undertraining their key employees, says Myers.

He challenges the theory in a new working paper, Coactive Vicarious Learning: Towards a Relational Theory of Vicarious Learning in Organizations, in which he argues that observation and imitation are rarely the best ways for employees to learn on the job.

“There are some realms of life where that is true, but for the most part, problems in business are more complicated,” says Myers.

“WHAT THAT MEANS PRACTICALLY IS VICARIOUS LEARNING MUST BE MORE INTERACTIVE”

The limitations of traditional forms of knowledge management come from two sets of assumptions, he argues.

The first assumption is that the most important elements of a job function are observable, ignoring the crucial tacit knowledge that can influence how someone carries out his or her job. “I could watch a colleague challenge a student and I could think that’s the way I should teach, but what I miss is the backstory, about why he is doing it in that particular case.”

Perhaps even more crucial, those systems assume that the person undertaking the learning wants to duplicate exactly what the other person is doing—despite the fact that they may be perpetuating mistakes made by a predecessor or simply following procedures that may be a bad fit for a person of a different personality and skillset.

New research suggests the best way to transfer knowledge in corporate roles is to improve upon traditional vicarious learning models. (Image: ©iStock.com/cacaroot)

Instead, Myers envisions a model of coactive vicarious learning.

“The major shift theoretically is moving from a language of transfer, of taking fully formed knowledge and passing it from one person’s head to another, and instead talking about co-creation and building it together,” he says. “What that means practically is vicarious learning must be more interactive. Both the learner and the sharer of knowledge bring things to the table and together create something new.” Myers was inspired to study the topic based on his own experience as an outdoor wilderness instructor, an area in which the cost of failure is too high for people to learn only from their own experience. “Trial and error is not the way you want to learn rock climbing,” says Myers.

When acquiring knowledge is life or death

In his own research, he has spent hundreds of hours studying a similarly fraught industry—high-risk medical transport teams—to learn how they acquire knowledge that can literally mean the difference between life and death. He found that much of their vicarious learning occurred not through studying procedural manuals, but through informal storytelling in the downtime between missions, in which team members related past incidents. “They would dig in with each other and dissect prior cases a little bit, asking, ‘Why did you do it this way, and not that way?’ It’s happening in these more discursive kind of ways.”

By contrast, Myers argues that many companies employ knowledge management systems that favor more independent, rather than discursive, learning. “They say, ‘I am going to write everything down and everyone who wants to know anything about the industry can have that document at their fingertips,’” says Myers. Except this system often doesn’t get used.

That fact was driven home to Myers while he was interviewing a tech manager. “He said, ‘I use the knowledge management system all the time—but I just scroll down to the bottom and see who wrote it, and then I call them.’”

By trying to prepare content that will suit everyone and cover every situation, the authors necessarily strip out the nuance of how things really work in a corporate context. “What we often get in a knowledge management system is the least common denominator,” says Myers. “It’s the bare minimum of what I am willing to share with the world, as I worry about what might get back to me.”

But if coactive learning is key, the difficulty is figuring out how to incorporate it into an organization. Myers makes a distinction between on-line learning, which occurs at the same time work is being performed, and off-line learning, which happens after the fact. Both, he says, can be either independent or coactive.

Watching surgery through the observation glass, for example, may be on-line learning, but it doesn’t give a doctor enough of an appreciation for how to perform a surgery. In order to be successful, says Myers, workers shadowing others must be able to ask questions and debrief in a way that allows them to synthesize information for their own approach and personality.

Off-line learning, meanwhile, doesn’t have to be done alone. The stories told by the medical transport teams—or for that matter, stories told at happy hour by employees at the bar around the corner from the office—can go a long way toward informally educating employees on how the company really works. Managers can help institutionalize this kind of learning in a number of ways. Instead of a knowledge management system consisting of dry reports on job duties, companies can create narrative simulations that are more interactive, allowing employees to debrief with managers about what they did right and wrong.

(Myers notes that Harvard Business School’s own faculty “onboarding” process includes experiences where junior faculty teach a mock class to a lecture hall full of their colleagues, who play the roles of students and mentors in providing feedback after the fact.)

Interactive learning through office norms

Less formally, however, managers can create more interactive vicarious learning through the way they structure the office and create cultural norms. Google built its New York City campus two years ago with the principle that employees should never be more than 150 feet from some form of free food—creating gathering places for workers to interact. “They wanted to engineer serendipity,” says Myers, “creating these kind of casual hallway and cafeteria conversations where the genesis of a lot of great ideas take place.”

Managers don’t have to redesign a building to engineer these encounters. Simply observing where employees naturally congregate and then tacitly condoning those conversations, or actively participating in them, can go a long way toward normalizing the kind of office culture that encourages employee interaction, says Myers.

Managers can take it to the next level by including a regular time for coactive vicarious learning during meetings, asking for stories about something employees figured out in the past week, and setting expectations for those kinds of discussions.

Some senior employees in the high-tech industry set office hours in which they encourage colleagues to come to their office and ask questions. “A lot of this is clearing the brush,” says Myers. By subtly changing office culture to encourage employees to interact in both formal and informal ways, he argues, managers can move employees away from the kind of dry learning that stymies growth and creativity and toward the kind of co-created knowledge that allows employees to really make the job their own.

“Managers can’t say ‘Go sit down and have a good conversation about your past experiences,’ but they can set up structures to let this happen naturally,” says Myers. “And that becomes very critical.”

ABOUT THE AUTHOR

Michael Blanding is a senior writer for Harvard Business School Working Knowledge.

[Ref] A general theory of software engineering: Balancing human, social and organizational capitals

Authors: Claes Wohlin, Darja Šmite, Nils Brede Moe

Article Ref: doi:10.1016/j.jss.2015.08.009

Highlights

  • Software engineering is knowledge-intensive and intellectual capital is crucial.
  • Intellectual capital may be divided into human, social and organizational capitals.
  • A theory of software engineering is formulated based on these three capitals.
  • The theory is based on industrial observations and illustrated in a case study.
  • The theory may be used by both researchers and practitioners.


Abstract

There exists no generally accepted theory in software engineering, and at the same time a scientific discipline needs theories. Some laws, hypotheses and conjectures exist, but as yet no generally accepted theory. Several researchers and initiatives emphasize the need for theory in the discipline. The objective of this paper is to formulate a theory of software engineering. The theory is generated from empirical observations of industry practice, including several case studies and many years of close collaboration between academia and industry. The theory captures the balancing of three different intellectual capitals: human, social and organizational capitals, respectively. The theory is formulated using a method for building theories in software engineering. It results in a theory where the relationships between the three different intellectual capitals are explored and explained. The theory is illustrated with an industrial case study, where it is shown how decisions made in industry practice are explainable with the formulated theory, and the consequences of the decisions are made explicit. Based on the positive results, it is concluded that the theory may have good explanatory power, although more evaluations are needed.


Keywords

  • Software engineering theory;
  • Intellectual capital;
  • Empirical

1. Introduction

Software development is a very knowledge-intensive activity. It is an engineering endeavour involving a lot of design, and the production is relatively simple. To develop software, many different people interact within an organization. Thus, software development is hugely dependent on people (DeMarco and Lister, 2013). However, people alone are insufficient. Software development is to a very large extent a team effort, and hence the interaction between people and the complementarity in expertise are prerequisites for being successful. Furthermore, the organization in which the people work provides the infrastructure and environment needed to leverage on the individual skills and their combined value. The organizational aspects relate to processes, methods, techniques and tools being part of the work environment. These three aspects are captured in the concept of intellectual capital. The objective of the paper is to formulate a general theory of software engineering from empirical observations of how industry actively works with human, social and organizational capitals (components of intellectual capital), to help explain and reason about the combinations of intellectual capital components (ICCs) needed to be successful in software development.

Intellectual capital may be defined as: “the sum of all knowledge firms utilize for competitive advantage” (Nahapiet and Ghoshal, 1998 and Youndt et al., 2004). The sum of all knowledge means that the concept of intellectual capital encompasses all assets available to a company. Different divisions of intellectual capital into components exist. Here it is chosen to use the division discussed by Youndt et al. (2004). Some alternative divisions are briefly introduced in Section 2.1. Youndt et al. (2004) divide the general concept of intellectual capital into three ICCs: human capital, social capital and organizational capital. They are depicted in Fig. 1 together with the main level where each primarily resides, i.e., individual, unit and organizational, respectively. The ICCs are described in Section 2.

Here, the concept of a unit is used to denote an entity utilizing the three components of intellectual capital: human, social and organizational capitals, respectively. The unit may be a team, a department or any other entity for which it is relevant to discuss the concept of ICCs. A unit includes people, who possess a certain level of human capital through their experiences and expertise. It also has a social capital both in terms of how it can leverage on the social interaction within the unit, and how it uses its external contacts to create value. The external contacts and networks may include customers, internal people in the organization, or external networks (including communities of practice, blogs and other external contacts and information). The unit exists in a context, which provides the organizational capital, for example, the support available to software engineers in terms of infrastructures. The latter includes all aspects of an organization that remain if removing all humans.

From the above reasoning, it becomes clear that the different components of intellectual capital are what make it possible to develop software. Based on this observation, this article contributes a theory of software development that captures the balancing of the ICCs that software organizations use in practice. Thus, the formulation of the theory is based on observations of practice and the insight that although organizations are different, they face a similar challenge. They need to balance the ICCs to be able to conduct their business in a cost-effective and competitive way. Balance refers to compensating a loss in one ICC by improving either the same ICC or at least one of the other ICCs. The article presents the formulated theory and its constituents. Furthermore, it illustrates the theory in a real industrial case and also provides some examples taken from industrial collaboration.

Fig. 1. Intellectual capital and its three components.

The remainder of the article is structured as follows. Related work is presented in Section 2. Section 3 introduces the theory based on the steps recommended by Sjøberg et al. (2008). The theory is exemplified and illustrated by an empirical case in Section 4. In Section 5, a discussion is provided, and the article is concluded in Section 6.

2. Related work

2.1. Intellectual capital and software engineering

In software engineering, there has been much discussion about how to manage knowledge, or foster “learning software organizations”. In this context, Feldmann and Althoff have defined a “learning software organization” as an organization that is able to “create a culture that promotes continuous learning and fosters the exchange of experience” (Feldmann and Althoff, 2001). Dybå places more emphasis on action in his definition: “A software organization that promotes improved actions through better knowledge and understanding” (Dybå, 2001).

Because software development is knowledge-intensive work, intellectual capital is a particularly relevant perspective for software companies. Intellectual capital is called the main asset of software companies (Gongla and Rizzuto, 2001 and Rus and Lindvall, 2002). It is seen as a construct with various levels (individual, network, and organizational) (Youndt et al., 2004). As mentioned above, Youndt et al. (2004) divide intellectual capital into three components: human, social and organizational capitals. This is not the only proposal for how to describe intellectual capital. Stewart (2001) describes the essential elements or assets that contribute to the development of intellectual capital as:

  • Structural capital: Codified knowledge that can be transferred (e.g., patents, processes, databases, and networks).
  • Human capital: The capability of individuals to provide solutions (e.g., skills and knowledge).
  • Customer capital: The value of an organization’s relationships with the people with whom it does business and shares knowledge (e.g., relationships with customers and suppliers).

The possession of each of these assets alone is not enough. Intellectual capital can only be generated by the interplay between them. Therefore, Willcocks et al. (2004) propose a framework, which also includes a fourth kind of ICC—social capital. Social capital helps to bring structural, human and customer capital together and encourages interplay among them.

Here it has been chosen to use the division of intellectual capital advocated by Youndt et al. (2004) for two main reasons. First, we agree with Youndt et al. that the term organizational capital is more fitting than structural capital because this is capital the organization actually owns (human capital can only be borrowed or rented). Second, both frameworks define social capital to consist of knowledge resources embedded within, available through, and derived from a network of relationships. We support Youndt et al.’s argument that such relationships are not limited to internal knowledge exchanges among employees, but also extend to linkages with customers, suppliers, alliance partners, and the like. We then see customer capital as part of social capital.

Table 1. Types of intellectual capital, based on the synthesis by Youndt et al. (2004) and examples by Moe et al. (2014).

Human capital
  Definition: The “skill, knowledge and similar attributes that affect particular human capabilities to do productive work”, which can be improved through health facilities, on-the-job training, formal education and study programmes (Schultz, 1961, pp. 8–9). This capital resides with, and is utilized by, individuals.
  Specific examples: Domain knowledge; knowledge about programming, practices, languages and architecture.

Social capital
  Definition: The actual and potential resources embedded within, available through, and derived from the network of relationships possessed by an individual or social unit. According to Nahapiet and Ghoshal (1998), social capital has three main dimensions: structural (including network ties, network configuration and appropriable organization), cognitive (including shared codes and language and shared narrative) and relational (including trust, norms, obligations and identification).
  Specific examples: Relationships between team members; networks of experts; participation in external forums; communication, coding and architectural conventions; trust in people outside the unit; pride in and identification with the product.

Organizational capital
  Definition: The possessions remaining in the organization when people go home after work. This includes the “institutionalized knowledge and codified experience residing within and utilized through databases, patents, manuals, structures, systems and processes” (Youndt et al., 2004).
  Specific examples: Software source code; documentation; documented work processes.

Creating intellectual capital is more complicated than simply hiring bright people. The importance of intellectual capital can be demonstrated by the ratio of intellectual capital to physical capital involved in the production of software. Symptomatically, the ratio of the software development industry is found to be seven times the ratio of other industries that are heavily reliant on physical capital, such as the steel industry (Bontis, 1997, Bontis, 1998 and Tobin, 1969). In a study on intellectual capital in Systematic Software Engineering Ltd, Mouritsen et al. (2001) found that the main motivation for understanding the different elements of intellectual capital was to make the company’s knowledge resources and key competency areas visible and to monitor management’s efforts to develop these. Also, management wanted to establish a new basis for deciding about the future of the company.

Youndt et al. (2004), through their review of intellectual capital, conceptualize intellectual capital through the three distinct components: human, social, and organizational. Human capital refers to individual employee’s knowledge, skills, and abilities. In software engineering these are often associated with technical skills including design expertise, domain knowledge and product knowledge (Faraj and Sproull, 2000 and Moe et al., 2014). Organizational capital represents institutionalized knowledge and codified experience stored in databases, routines, patents, manuals, infrastructures, and the like. Many traditional software companies that follow plan-driven approaches believe that a good process leads to a good product, and thus standardized and well-documented processes support developers, while interaction among software developers is usually minimized. Finally social capital consists of knowledge resources embedded within, available through, and derived from a network of relationships possessed by an individual or a social unit. Social capital is both the network and the assets that may be mobilized through that network (Bourdieu, 1986). It enables achievements that would be impossible without it or could only be achieved at an extra cost. Also, because social capital increases the efficiency of information diffusion, a company can have less redundancy in, e.g., skills or roles if the social capital is strong. An organization supports the creation of social capital when it brings its members together in order to undertake their primary task, to supervise activities, and to coordinate work, particularly in the context requiring mutual adjustment.

Different ICCs belong on different levels—individual, unit or organizational. While the human and organizational capital components are rather straightforward, social capital is a more complex phenomenon. In the research on social capital, scholars have tended to adopt either an external viewpoint (the relations an actor maintains with other actors) or an internal viewpoint (the structure of relations among actors within a grouping) (Adler and Kwon, 2002). The distinction between the external and internal views on social capital is, to a large extent, a matter of perspective and unit of analysis. The relations between an employee and colleagues within a unit are external to the employee but internal to the unit. Because the capacity for effective software development in a unit is typically a function of both its internal linkage and its external linkage to other units and experts, we have adopted the view of Nahapiet and Ghoshal (1998), who describe social capital as both internal and external to a unit.

A summary of the definitions of the three different components of intellectual capital as described by Youndt et al. is given in Table 1. In this table, the information on how the concepts synthesized by Youndt et al. (2004) link to software engineering is provided based on Moe et al. (2014).

It is possible that organizations can develop these individual dimensions of intellectual capital independently. For example, targeted hiring of experts in specialized areas could help to acquire human capital. Similarly, procuring particular databases or investing in the installation of specific systems and processes could create organizational capital. Accumulation of social capital can be fostered by, e.g., establishing communities of practice and regular forums for interaction. However, there are strong interdependencies in the creation, development, and leveraging of the three components of intellectual capital. Organizational learning theorists (Nonaka and Takeuchi, 1995 and Schön, 1983) point out that organizations do not create knowledge; rather, people, or human capital, are the origin of all knowledge. And when people share or exchange tacit knowledge, this is most likely to be done through discussions. It is also suggested that “individual learning is a necessary but insufficient condition for organizational learning” (Argyris and Schön, 1996). In order for organizational-level learning to occur, individuals should exchange and diffuse shared insights and knowledge, that is, use their social capital. Social capital also helps in creating new knowledge among individuals and enables organizational learning to occur. Therefore, social capital has been found to be important in the development of human capital. And ultimately, much of the knowledge individuals create through human capital and diffuse through social capital becomes codified and institutionalized in organizational databases, routines, systems, manuals, and the like, thereby turning into organizational capital.

2.2. Theories in software engineering

The need for a firm theoretical basis for software engineering has been emphasized since the infancy of the area, as exemplified by Freeman et al. (1976). Specific theories for software engineering were also proposed, such as Musa’s (1975) theory with respect to the estimation of software reliability. The field has progressed since then, but there are still no commonly accepted theories for software engineering (Ralph et al., 2013). The need to build theories has been emphasized by, for example, Sjøberg et al. (2008) and more recently by Johnson et al. (2012). Thus, there is a drive to obtain a stronger theoretical foundation in software engineering.

In addition to theories, laws and empirical observations have helped to increase the understanding of the discipline as described by Endres and Rombach (2003). Some examples include Conway’s law (Conway, 1968) with respect to the relationship between system structure and organization, and Lehman’s laws (Lehman, 1979) on software evolution.

Conway’s law describes how the organization and software structure mirror each other. Endres and Rombach (2003) take it one step further and explain how the law can be interpreted as a theory, since there is a logical explanation to the law. Their explanation is that software system development is more of a communication problem than a technical problem, and hence the organization and the software structure are highly likely to be aligned.

Lehman puts forward five laws on software evolution (Lehman, 1979). His first two laws are used as examples here. The first law states that a system that is used will be changed. The second law relates to complexity and describes how a software system will become more complex as it evolves if specific actions are not taken to reduce complexity. Both these laws have logical explanations, and hence Endres and Rombach describe how they can be interpreted as theories.

Some of the laws described by Endres and Rombach (2003) are well established in both research and practice, others are not, and some of them can be turned into theories. However, since software development is a very knowledge-intensive activity involving a lot of people, there is a need for a theory that relates software engineers, software engineering teams, projects or organizations to the development and evolution of software systems. A theory taking human, social and organizational capitals into account is lacking in software engineering. This article attempts to fill this gap by contributing a theory that takes a broad perspective on software engineering, including human, social and organizational capitals.

3. Theory formulation

3.1. Background

The theory is inspired by the authors’ observations of industry practice. Research conducted by the authors in the past five years has provided vivid examples of the ways software organizations approach ICCs in practice (Moe et al., 2014 and Šmite and Wohlin, 2011). Specifically, the research conducted has focused on how several industrial partners practice global software development, execute software product transfers (relocation of development from one team or set of teams to new team(s), often in different locations) and manage the challenges related to such transfers (ibid). As a side effect, it has over the years been observed how companies make decisions to compensate for issues related to consequences of transfers, often in terms of the ICCs. In general, it has been observed that if actions are not taken, a transfer will mean a loss of experience and expertise in relation to the product, and hence a decline in human capital (in this case product knowledge and potentially domain knowledge), which often has a direct impact on development capabilities and a secondary impact on quality. Furthermore, it has been observed that after a transfer the new teams involved with a product are more dependent on the documentation and support in the organization than the experienced developers used to be before the transfer, i.e. the new teams depend more heavily on the organizational capital and the social capital (in particular in relation to the teams conducting the development before the transfer). Some specific examples:

Example 1: The product documentation was deemed insufficient for a transfer, and hence nine person-months were spent on improving the product documentation before transferring a software product (Šmite and Wohlin, 2010).

Example 2: A gradual transfer (Wohlin and Šmite, 2012) was conducted, i.e., joint development between sites was organized before transferring the software product. This resulted in a competence build up in the receiving site, while leveraging on the presence and active involvement of the original developers.

Example 3: Temporary relocation of experts familiar with the product from the sending site to the receiving site has been seen as a common practice to ensure the presence and accessibility of expertise and to transfer knowledge to the teams receiving the software product (Šmite and Wohlin, 2010).

The three examples, together with access to both product and project artifacts and continuous discussions with practitioners in different roles at the companies, have resulted in a general observation: companies try to compensate for a potential loss in one component of intellectual capital with different countermeasures, either in relation to the same component of intellectual capital (e.g., human capital—send an expert) or in another component of intellectual capital (e.g., organizational capital—improve the software product documentation, or social capital—foster interaction with remote experts from the original site). Thus, it has been observed that there is an interplay between different components of intellectual capital that companies try to master to ensure that the setting for the software product development or evolution is fit for its purpose, including the type of tasks to accomplish and the objectives in terms of, for example, delivery time and quality.

Based on the observations from the long-term collaboration with industry, in particular in the area of global software engineering, the objective here is to formulate a general theory for software engineering including the different components of intellectual capital.

3.2. General theory formulation

Based on the above, the following theory is put forward, the theory of:

Balancing Human, Social and Organizational Capitals for Software Development and Evolution

Software may be developed and evolved by having different combinations of the components of intellectual capital, i.e., a combination of human, social and organizational capitals. Many different combinations of the capitals may help to solve a given task with a specific objective in a given context. Changes in the task, objective or context may result in changes in the demand for intellectual capital, or changes in one or two of the components of intellectual capital may force a need to change one or two of the other components to adjust to the new situation. A balancing of the different components of intellectual capital is needed to ensure that software engineers, teams, or organizations are sufficiently equipped to carry out the task, with a specific objective at hand, in the given context.

Companies strive to find the right balance, which in a cost-efficient way gives a sufficient level of intellectual capital to carry out the tasks under the given constraints (features, time, cost and quality) with a specific objective in the given context. Too little intellectual capital means that the tasks cannot be carried out adequately, and too much intellectual capital most likely results in higher costs than desired. This is a delicate balance for companies developing software to master.

According to the different types of theories described in Sjøberg et al. (2008), the theory of balancing the ICCs for software development and evolution is primarily explanatory, although it may also help managers to answer “what if” questions, and hence at least partially help in prediction, or in reasoning about the effects of changes. The theory is formulated based on abduction from observations in industry, with the objective of capturing and helping to explain those observations. The theory is presented below according to the following steps:

1. Constructs of the theory.
2. Propositions of the theory.
3. Explanations to justify the theory.
4. Scope of the theory.
5. Testing the theory through empirical research.

Sjøberg et al. (2008) proposed these steps as suitable for formulating theories (in software engineering). Steps 1–4 are presented in Sections 3.3–3.6, followed by a summary of the theory and a discussion about its use in practice. An empirical case study is presented in Section 4 to illustrate the theory, and hence act as a starting point for step 5 above.

3.3. Constructs

The constructs of the theory relate to the building blocks that make up the theory. Thus, the question is: What are the basic elements?

When developing software, it is possible to have different levels of ambition from an organizational perspective (Rajlich and Bennett, 2000); ambition is here used in a general sense. For example, if having an old piece of software that is intended to be phased out shortly and replaced with a new software system, then the ambition of the organization may not be very high. It may be sufficient to keep it afloat and do some corrective maintenance, and it may not be perceived as critical to fix any issues immediately. Thus, the organization has a quite low ambition level. Another example may be when launching a new software system and trying to increase the market share for a specific type of product. In this case, it may be very important to have a high quality product and if problems occur then they should be addressed very quickly. Thus, the ambition level of the organization may be considerably higher than in the first case. This leads to a construct denoted as objective, which relates to the ambition level in terms of performance levels; see Section 3.3.1, where objective is focused on a specific performance level. This leads to performance being the second construct. Meeting the objective is referred to as success, where success in this context refers to the ability to conduct a software development task under a given objective with the intellectual capital available meeting the goals set by the organization. Thus, it is chosen to use “success” in a generic sense given that different organizations may have different criteria for being successful in their software development.

The actual development to be conducted is referred to as the task, which is the third construct of the theory. Some tasks are more challenging than others, and hence the objective should be set in relation to the task to be conducted. For example, working with corrective maintenance is a different task than adding new features to a software system. These tasks may differ in how complex they are to carry out, and hence the task to be conducted should be taken into account when deciding how to carry out the development. However, it should be noted that task complexity/difficulty is hard to measure objectively, and in particular the ability to conduct a task is highly dependent on the intellectual capital available. Thus, it is chosen to have task as a construct and not task complexity/difficulty, since the conduct of the task is handled through the ICCs in the theory; see Section 3.4. The task is connected to the objective through the development and evolution levels (performance levels) presented in Section 3.3.1.

To be able to conduct the task with the given objective, the intellectual capital should be carefully considered, in particular given that software development is a very knowledge-intensive discipline. For example, it may be obvious that taking a group of new graduates and letting them form a new team to develop a new feature for an existing large, complex and poorly documented software system may be an overwhelming task for them. This illustrates that a certain intellectual capital is needed to be able to perform the task. As described above, intellectual capital has been categorized into different components by different researchers. Here, it is chosen to follow the division by Youndt et al. (2004), where intellectual capital is divided into human, social and organizational capitals. These ICCs make up three important constructs of the theory. These three constructs are presented in Table 3 and discussed in more detail in Section 3.3.2.

Finally, the mixture of the objective and the task sets the target for the needed intellectual capital. In total, the intellectual capital has to be at a certain level to enable the task to be performed in relation to the objective set, and if the objective is met it should be viewed as a success. Thus, performance is a construct in the theory, since it ties together objective and task with the three ICCs: human, social and organizational capitals.

3.3.1. Performance levels

To describe the objective, five performance levels have been formulated, although in practice the scale is continuous. Furthermore, the levels are qualitative although numbers are associated with the levels to ease the discussion about them and to help in ordering the levels. The continuity of the scale has emerged in discussions with practitioners where it became evident that although being on one performance level (e.g., level 3), they were closer to one of the neighbouring levels than the other. However, discrete qualitative levels were used as a starting point for the discussions with industry. The intention is to capture the ambition of an organization related to desired performance for a given task. An organization is expected to have different ambition levels for different software development projects or products, and it may also vary over time. In Table 2, five performance levels have been defined with five being the highest and most ambitious level, i.e., the organization tries to ensure the level by managing the intellectual capital accordingly. The levels relate to the capability to meet the objectives set by the organization developing the software.

3.3.2. Intellectual capital

As stated earlier, the intellectual capital may be described as consisting of three components. The human capital captures the skills, knowledge, expertise and experience of the individuals in the unit. Social capital is concerned with the network outside and inside the unit (e.g., outside and inside the team). The third component is the organizational capital, which is the assets in the organization without the people. This includes documentation in relation to the actual software being developed, but also supporting aspects such as processes, tools and culture. These three constructs are divided into areas and specific aspects in relation to each capital, as exemplified in Table 3. It should be noted that the ICCs are intended to cover all aspects of knowledge available to a company, i.e. the “sum of all knowledge” (Nahapiet and Ghoshal, 1998 and Youndt et al., 2004). The sum should not be viewed in mathematical terms; instead it should be seen as a metaphor for balancing the qualitative judgment of the aspects making up the different ICCs.

Table 2. Performance levels.

Level 1: It is almost impossible to handle the task, and it takes a long time. The development is more or less in survival mode.
Level 2: It is hard to handle the tasks. Major problems occur more often than not.
Level 3: The task requires some effort. Occasionally, major problems may occur. In most cases, it works quite smoothly.
Level 4: The task is handled without any major problems.
Level 5: The task is very easy to handle.
Table 3. The three intellectual capital components (ICCs) and examples of their categories and aspects.

Human capital
  Skills and knowledge: Technical skills (programming and tools, patterns, basic computer science principles); domain knowledge (including understanding of solutions to domain problems); software product knowledge (program properties, existing software architecture, concept location within the code); knowledge about ways of working (coding conventions, development tools, etc.).
  Creativity: Development of new, innovative ideas.

Social capital
  The unit’s skills of working together: Solving problems together; making decisions together; shifting workload; common goals; performance of the unit; sharing knowledge within the unit; giving each other feedback; knowing what others are doing; learning from experience.
  External relations: Collaboration with other units; collaboration with experts; collaboration with customers; collaboration with product owners and program managers; networking through communities of practice.

Organizational capital
  Software: Software source code; software architecture.
  Documentation: Documentation supporting understandability and maintainability of the software; process documentation.
  Organization’s culture: Stories and rituals that contain valuable ideas and ways of working.
  General infrastructure: Development environment; knowledge-based infrastructure.

The theory is centred around these three ICCs and the processes of balancing them for performance on a software development task under a given objective. In relation to this, several things may be noted:

  • In any situation involving more than one software developer, all three components are important.
  • The qualities of each of these components, its categories and aspects for a given unit form the unit’s intellectual capital profile.
  • For any non-trivial software development task, there is a minimum “sum” of the components, and none of the components adds zero value. Unfortunately, there is no mathematical way of adding ICCs together quantitatively, so the sum should be interpreted as a perceived combination of the “values” of the components.
  • There is a maximum sum of the components.
  • In normal cases, there is a sum of the three ICCs that is perceived as sufficient for the objective set and the task at hand. This is referred to as the target level to reach when combining the ICCs, which can be achieved through different intellectual capital profiles.
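To make these constructs concrete, the following minimal Python sketch (not from the paper; the class and function names, the numeric 1–5 scale, and the idea of deriving a target from objective and task difficulty are all illustrative assumptions) models a unit’s intellectual capital profile and the target level that a combination of the three ICCs should reach:

```python
from dataclasses import dataclass

# Toy model only: the paper stresses that the "sum" of ICCs is a qualitative
# judgment, not a mathematical quantity. The 1-5 numbers below simply make
# the reasoning explicit and loosely mirror the performance levels of Table 2.

@dataclass
class ICCProfile:
    human: float           # skills, knowledge, expertise of the individuals
    social: float          # collaboration within the unit and external networks
    organizational: float  # code, documentation, processes, infrastructure

    def perceived_sum(self) -> float:
        """Stand-in for the perceived combination of the three components."""
        return self.human + self.social + self.organizational


def target_level(objective: int, task_difficulty: float) -> float:
    """Hypothetical target intellectual capital implied by the objective
    (performance level 1-5, Table 2) and a judged difficulty of the task."""
    return objective * task_difficulty


# A unit that is strong on human capital but has poorly documented software:
unit = ICCProfile(human=4, social=3, organizational=2)
needed = target_level(objective=4, task_difficulty=2.5)
print(unit.perceived_sum() >= needed)  # False: this profile falls short of the target
```

Different profiles (e.g., weaker human capital compensated by stronger organizational capital) can reach the same perceived sum, which is exactly the balancing described above.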

3.4. Proposition

The constructs interact in that a software development organization may set a goal of what to achieve in terms of what software is to be delivered (task) and how well and how fast it should be done (objective). The ability to reach the goal, i.e. to develop and evolve a software product, and thus the resulting performance, is a combination of the objective, the task and the sum of ICCs. Thus, the proposition is that performance is a result of the objective, the task and the sum of ICCs. Fig. 2 illustrates the theory. The task is in the centre, and it is performed with a given objective (desired level of performance) using the intellectual capital available. The objective sets the expectations on the conduct of the task, i.e. in terms of scope, quality, time and cost. The ICCs taken together facilitate carrying out the task with respect to the objective. The outcome is a performance, which should be compared with the objective of the organization, based on its view of being successful in software development. The reasoning could be compared with requirements coming into software development (objective) and the available resources to implement the requirements (available intellectual capital). Performance is the outcome in terms of the objectives set by the organization.

Fig. 2. Illustration of the theory proposition.

Thus, Fig. 2 illustrates how the goal in terms of objective and the task and its difficulty require a certain combination of ICCs to reach the target. This is further illustrated in Fig. 3. To the left in Fig. 3, the starting point is the desired level of performance. The scale is in reality continuous, but for reasons of approximation and simplicity, it has been chosen to use five performance levels as described in Section 3.3.1. Here, the levels are not shown, since the objective is to illustrate the relationships rather than to describe a real case. The performance levels are used in the discussion of the usage of the theory in Section 3.7 and in the actual case described in Section 4. Once the performance level is set, the difficulty of the task has to be judged, and depending on the difficulty and the objective a certain sum of intellectual capitals is needed, as shown with the arrow going from performance to intellectual capital. The needed intellectual capital becomes the target to be able to perform the task. If the target intellectual capital is met exactly by balancing human, social and organizational capitals, then the actual performance for the task at hand equals the objective. Fig. 3 illustrates the principal relationships, and some scenarios are discussed below to further illustrate the theory about the balancing of the different intellectual capitals to achieve the intended performance (objective for the task).

It is worth noting that the arrow going from performance to intellectual capital and the arrow going back from intellectual capital to performance will have the same angle, although they are not necessarily in the same place, as illustrated in Fig. 4 below. The actual angle of the arrows between performance and intellectual capital is given by the difficulty of the task. It should be noted that the difficulty of the task does not imply whether the arrow should go up, down or be on the same level. An easier task will of course have a lower requirement on the intellectual capital than a more challenging task. However, the arrow from performance can still go down for a challenging task, since the starting point for the arrow depends on the objective (in terms of performance level) and not the task as such.

Fig. 3 illustrates how it is possible to set an objective in relation to the performance levels introduced in Section 3.3.1. Given the objective and then taking the task into account, a certain target level of the intellectual capital is set. The target level indicates the level of intellectual capital needed to perform the task under the given objective. Given that the intellectual capital may be described in terms of three components: human, social and organizational capitals, the challenge is then to identify a combination of the ICCs or a suitable ICC profile that in total gives the intellectual capital needed.

Different scenarios may occur, as illustrated in Fig. 4. If there is too little intellectual capital (Fig. 4(a)), then the development and evolution will be challenged in relation to either fulfilling the task or reaching the objective in terms of performance levels. Fig. 4(a) illustrates the situation where the objective and task taken together point to a targeted intellectual capital (a combination of human, social and organizational capitals) which is higher than what is actually available. This is shown by the intellectual capital being lower than the target (illustrated with the arrow denoted “acquired sum of ICCs”), and hence the performance will not be in accordance with the objective for the task to be conducted.

On the other hand, if there is too much intellectual capital (see Fig. 4(c)), the development and evolution may go more easily than required, which may be good, but it may result in a solution that is too costly. For example, it may be too costly in terms of having too many highly qualified people with long experience in the unit (human capital higher than the needs), investing too much in evolving or maintaining the social network (social capital higher than the needs), or putting too much effort into documenting well-documented software or refactoring well-structured software (organizational capital higher than the needs). In Fig. 4(c), it is shown how the intellectual capital, i.e. the acquired sum of the ICCs, becomes higher than the target. Thus, the actual performance becomes higher than the objective. In this scenario, it is possible to consider lowering the total intellectual capital or accepting an expected performance that is higher than the objective.

Fig. 3. A summary of the theory constructs and the proposition of the interaction.

Fig. 4. (a) Targeted ICCs sum not reached. (b) Targeted ICCs sum reached. (c) Targeted ICCs sum is overreached.

In Fig. 4(b), a scenario is shown where the acquired sum of the intellectual capital is equal to the target. Thus, in this situation the intellectual capital matches the needs given by the objective and the task. At the same time, it is important not only to optimize the intellectual capital in relation to the current situation (Fig. 4(b)), but also to plan for future needs. The latter should be captured in the task when it is formulated.

The balancing of ICCs implies that a certain human capital may require a certain level of organizational capital to reach the target, while the same level of organizational capital may be insufficient for a different human capital. For example, developers with less experience of the software will most likely need better documentation than those who have worked with the software for a long time. Furthermore, social capital plays an interesting role since it may facilitate the development of intellectual capital (Nahapiet and Ghoshal, 1998): good networking with experts outside a unit, for example, facilitates learning, the human capital may increase accordingly, and hence the intellectual capital as a whole increases.

3.5. Theory justifications

The theory is justified through its importance. It provides both practitioners and researchers with a terminology to reason about the relative importance of different ICCs. Furthermore, practitioners may use the theory to profile their units, and to reason about how changes in one ICC may be compensated by improving the same ICC or at least one of the others. Alternatively, it is possible to judge the consequence of changes in intellectual capital profiles using the theory as a basis for reasoning. The theory makes the relationships between ICCs explicit for software engineering.

The theory is based on industrial observations and logical reasoning. As indicated in Section 3.4, it is quite evident that a newcomer to a software development project will rely more on the organizational capital and the expertise of others (social capital) than someone who has been involved in the development of the software over a long period of time. Furthermore, it is no surprise that a more difficult task and, for example, higher goals in terms of performance (objective) will result in a need for a higher intellectual capital, and thus a higher sum of ICCs, than a very simple task combined with a lower ambition, for example because the software is going to be phased out anyway.

3.6. Scope of theory

The theory is intended to be applicable to all types of software development and evolution in which more than one individual is involved. Thus, the theory does not target one-person projects or trivial software development. This is also discussed in Section 3.3.2. The challenges of balancing the ICCs are independent of, for example, the type of software being developed, the development approach used or the project constellation (single- or multi-team projects). The theory is hence general for software development and evolution.

3.7. Usage of theory

The theory may be used in several different ways from a practical management perspective. Based on the experience from working with industry in general (Wohlin et al., 2012), and in particular the research on global software engineering in close industrial collaboration, exemplified by the work on software transfers reported by Šmite and Wohlin (2012), it is clear that managers in practice do balance different components of intellectual capital. It may not be done explicitly in these terms, but based on experience, expertise and common sense. However, formulating the practice as a theory helps managers make the importance of ICCs and the relations between ICCs explicit. The theory systematizes and explicates common industrial practice, it helps managers reason about these issues and it makes it easier to communicate the tacit knowledge of an experienced manager. Furthermore, it makes the relationships between different components of intellectual capital explicit so that software engineering researchers can better understand how their research may contribute to industry practice.

The theory will, for example, help managers in relation to answering questions such as:

1.   Where are we?

Managers could reflect on the current performance achievements. By reasoning about the current objective and the difficulty of the task, it is possible to judge the targeted situation in terms of the different ICCs. Reflecting on the sufficiency of the actual ICCs would then explain the performance level. If the situation is not satisfactory, actions may be taken, whether the sum of the intellectual capital is below the target or substantially above it. In the latter case, the manager may choose to pull out some experts and put them on another project.

2.   Where will we end up (without actions)?

It is possible to conduct a consequence analysis and reason about the different ICCs. The manager may have a current situation in which some change is foreseen or planned, and hence the manager could estimate the consequences of the change. For example, if development is to be transferred from one site to another, the intellectual capital will most likely change, and hence actions may be taken to mitigate this, including improving the organizational capital or moving an expert along with the software development for some time (social capital) to work with knowledge transfer (strengthening the human capital). Experiences of similar changes, documented according to the theory concepts, may help in such consequence analyses.

3.   Where do we want to be (what is the target)?

It is also possible to use the theory to explicate where the development ought to be, i.e., what the target is. The manager may choose different actions to ensure that the target is met. In a given situation, with a certain objective and some tasks at hand, the manager could ensure that there is sufficient intellectual capital to meet the target. The manager could also reason about different alternatives for reaching the target, i.e., which ICCs could be changed most cost-efficiently to meet the target?

In summary, the theory makes the tacit knowledge of managers more explicit and it supports managers in their reasoning about the complex challenges related to software development and evolution. The manager is able to reason about the dependencies between an objective, a task and the different ICCs, or to change the target to something more realistic under the given circumstances. Changing the target may imply either accepting a lower ambition (objective) or simplifying the task, if it is deemed impossible to find a cost-efficient combination of the ICCs that meets the current target. Furthermore, the theory helps software engineering researchers better understand the relationships between the different intellectual capitals and hence put their own research in a larger context.

An illustration of the usage of the theory can be found in Fig. 5. Assume that the organization is prepared to aim for level 3 in terms of the development and evolution levels for the given task (see Section 3.3.1 for the different levels). Level 3 implies that tasks are handled with some effort and that major issues occasionally appear and have to be solved. The software developers are not struggling, but they are definitely challenged occasionally. The objective and the task set the target for what to achieve. Given the target, the manager can now look at the intellectual capital available and reason about strengths, weaknesses and different options for reaching the target in a cost-efficient way.

Fig. 5. An illustration of the use of the theory.

In the example in Fig. 5, the manager judges that the software developers have reasonably strong human capital, and that the organizational capital is also quite good. The manager judges that the weakest ICC is the social capital. In total, however, the three components should be sufficient to reach the target (shown as perceived intellectual capital). As development goes on, the manager may monitor the progress and evaluate whether the judgment is correct. If it turns out that, for example, the organizational capital has been overestimated and the combination of ICCs in reality does not reach the target (shown as actual intellectual capital), the manager now has an explicit mental model of the situation and can more easily discuss actions to address the concerns. The manager may evaluate different alternatives, i.e. improvement actions, to address the concern that the target does not seem to be met. The inability to meet the target may show up as development being at level 2 rather than at the intended level 3. Thus, the manager may accept the situation, lower the target, change the development tasks, or strengthen the intellectual capital to ensure that the target is met with the current objective and the assigned development task. In any case, the formulation and illustration of the theory give the manager an explicit framework for conducting a root-cause analysis of performance gaps, for reasoning about the balancing of the ICCs, and for communicating why certain decisions are made.

This section has provided reasoning regarding the usage of the theory, while a practical illustration of how the theory can be observed in many of the decisions taken in a software development project is provided in the next section. The case presented includes a transfer of software development from one development site to another within a company, as well as other organizational changes, such as merging two business units, scaling up the number of development teams and distributing the development of related components.

4. Empirical case

In this section, a case that illustrates a potential use of the theory and how to operationalize the theoretical constructs and propositions is described.

4.1. Research design

Empirical cases are used in the theory-building process to examine the validity of theories (Sjøberg et al., 2008). Besides validating the predictive and explanatory powers of a theory, empirical studies can help test the ability to operationalize the theoretical constructs and propositions. Having said that, validation is best conducted by others to avoid researcher bias, and hence the case presented here focuses on the operationalization of the constructs and propositions put forward in the theory. The case study was designed as an exploratory study (Yin, 2009) to investigate the interplay between the human, social and organizational capitals, as well as its relation to the organization’s ability to develop and evolve software products. Thus, the case study was not originally designed to illustrate the theory. However, given that the constructs in the theory were used in the case, it became a good case for illustrating the theory as such. In particular, the empirical research was designed to explore the following questions:

• How do developers evaluate the intellectual capital profile of the unit they work in, in relation to the assigned tasks? Are there any events that change the intellectual capital profile during the product evolution?

• How do developers rely on different components of intellectual capital in relation to the assigned tasks? Does it change in different phases of product evolution?

• How do developers perceive their performance in relation to the assigned tasks? Does it change in different phases of product evolution?

The researchers used open-ended questions to explore the phenomena later used in the theory, and sought explanations behind the relationship between the performance and the intellectual capital profiles.

The empirical case described in this article has not been previously reported.

4.2. Context, data collection and analysis

The context of the case study is a multinational software company (below referred to as “the company”), and the study object is the evolution of a relatively small sub-system (∼100 KLOC) of a compound software system. The sub-system was transferred from one site of the company to another; this transfer is investigated as the event with the strongest impact on the intellectual capital profile of the staff involved in the development. The history of the product evolution is illustrated in Fig. 6.

Fig. 6. Product history.

In this article, the data collected (see Table 4) are used to illustrate the applicability and validity of the theory constructs and propositions. First, individual interviews were conducted with different representatives to cover the product history, the development process and the environment, and observations were gathered from visiting the onshore site of the company. In the next step, focus group discussions were held with the Swedish and Indian development teams involved in the evolution to elicit the perceived value of their intellectual capital profiles, their reliance on different components of intellectual capital and their perceived performance. The unit selected for profiling according to Table 3 was the set of software teams developing the sub-system (two Swedish teams before the transfer, and two Indian teams after the transfer), and hence their internal collaborative skills are referred to as teamwork skills.

Table 4. Empirical data collection.

Method | Number | Duration | Participants | Timeframe
Formal interviews | 1 | 1 h | Transfer manager from Sweden (shortly after the transfer) | October 2012
Formal interviews | 6 | 1.5 h | Swedish developers, an architect, a product owner, a tester | May–June 2013
Survey on knowledge transfer | 13 | – | Swedish participants (after the transfer) | November 2012
Survey on knowledge transfer | 8 | – | Indian participants (after the transfer) | November 2012
Group interviews | 1 | 1.5 h | 4 team members from 2 development teams in Sweden | October 2013
Group interviews | 1 | 2 h | 6 team members from 3 development teams in India (conducted via a video-conference) | September 2013
Follow-up interviews | 2 | 1 h | Release manager and a product manager from Sweden | October 2013
Table 5. Starting IC profile of the Swedish teams.

ICC | Categories | Evaluation | Reliance on ICCs | Performance
Human capital | Skills and knowledge | Strong | Human capital | 2
Human capital | Creativity | Strong | |
Social capital | Teamwork skills | Medium | |
Social capital | External relations | Weak | |
Organizational capital | Software | Weak | |
Organizational capital | Documentation | Medium | |
Organizational capital | Organizational culture | Strong | |
Organizational capital | General infrastructure | Medium | |

The data generated by the focus groups contained:

• A categorization of ICC aspects from Table 3 (into three groups: strong, medium and weak), which formed an IC profile (a minimal encoding sketch of such a profile is given after this list):

  – Human capital, including Skills and knowledge.

  – Social capital, including Skills of the unit in terms of ability to work together, and External relations.

  – Organizational capital, including Software, Documentation, Organizational culture, and General infrastructure.

• Events that influenced the ICCs, actions that the organization took to balance the ICCs, and the consequent changes in the intellectual capital profiles.

• An analysis of the reliance on different components of intellectual capital, in which the participants determined the importance of the different components (human, social and organizational) during different stages of evolution (before and after the identified events).

• Perceived performance in different stages of evolution (before and after the identified events), using the five performance levels listed in Section 3.3.1.
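As a purely illustrative aid, and not part of the case study's instrumentation, the focus group output could be encoded as a simple data structure. The dictionary layout, the three-level scale values and the helper function below are assumptions made here for readability; the example entries mirror Table 5 (starting profile of the Swedish teams).

```python
# Hypothetical encoding of an intellectual capital (IC) profile as elicited
# in the focus groups; values mirror Table 5 (starting profile, Swedish teams).

ic_profile_swedish_start = {
    "human_capital": {
        "skills_and_knowledge": "strong",
        "creativity": "strong",
    },
    "social_capital": {
        "teamwork_skills": "medium",
        "external_relations": "weak",
    },
    "organizational_capital": {
        "software": "weak",
        "documentation": "medium",
        "organizational_culture": "strong",
        "general_infrastructure": "medium",
    },
    "reliance_on_iccs": "human_capital",  # component the teams relied on most
    "performance_level": 2,               # five-level scale from Section 3.3.1
}

def weakest_aspects(profile: dict) -> list:
    """List the aspects rated 'weak' - candidates for mitigating actions."""
    weak = []
    for component, aspects in profile.items():
        if isinstance(aspects, dict):
            weak += [f"{component}: {name}" for name, level in aspects.items() if level == "weak"]
    return weak

print(weakest_aspects(ic_profile_swedish_start))
# ['social_capital: external_relations', 'organizational_capital: software']
```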

In Section 4.3, the data gathered in the case study are reported and discussed in the light of the theory constructs.

4.3. Balancing intellectual capital components in practice

Official start of sub-system development: In the beginning, the two Swedish teams that developed the sub-system characterized their intellectual capital (in the group interview) as strong human capital, medium teamwork skills but weak relations with stakeholders external to the development teams, and organizational capital with a mix of strong, medium and weak characteristics (see Table 5). In relation to the tasks, the performance level was determined by the teams to be low (level 2—the teams had a hard time handling the tasks and major problems occurred more often than not). As one of the developers characterized it: “It was a one large chunk of code. It was hard to work with it”. Since the organizational capital related to the software was weak, the external relations of the teams were weak, and documentation and teamwork were medium, the developers relied primarily on their skills (human capital), which were deemed insufficient to reach high performance.

Table 6. Mitigating actions before the transfer.

Actions | Comments | Affected ICCs and aspects (cf. Table 3) | Reliance on ICCs | Performance changes (cf. Table 2)
First refactoring | Action to mitigate gaps in org. capital | + OC: Source code; + OC: Software architecture; + SC: Collaboration with experts | Human capital | 2 → 3
Integration of the two Swedish teams | Action to mitigate gaps in social capital | ++ SC: Collaboration with POs and program managers; + SC: Solving problems together; + SC: Making decisions together; + SC: Shifting workload; + SC: Common goals; + SC: Performance of the unit; SC: Give each other feedback; SC: Knowing what others are doing | Social capital | No change
Second refactoring | Action to mitigate gaps in org. capital | This refactoring was cancelled due to the subsequent transfer of the sub-system to a new group of developers that did not have enough human capital to finish the refactoring. | |

Notations used for effect illustration: a small increase: +, a large increase: ++, a small decrease: –, a large decrease: ––.

Several mitigating actions were taken to improve the performance as described below (see also Table 6).

First refactoring—Action to mitigate gaps in organizational capital: In order to improve performance, it was decided to organize a refactoring, which targeted the source code structure and hence readability, the implementation of coding conventions, and a few architectural improvements. Thus the organizational capital increased, and the perceived value of these efforts could be observed in the rise in performance from level 2 (the developers have a hard time handling their tasks and major problems occur more often than not) to level 3 (the tasks require some effort and occasionally major issues may occur, but in most cases it works quite smoothly). As a developer described: “It [refactoring] helped a bit but it could have been much more. But there was no time for that. It is still one big chunk of code.” Additionally, more frequent collaboration between the team and the architect (not in the team) was reported as a positive side effect of the refactoring.

Integration of the two Swedish teams—Action to mitigate gaps in social capital: The lack of integration between the two teams was addressed by organizational changes. Because the teams had belonged to different managerial structures from the beginning, there was very little mutual collaboration. Although the teams were located in close proximity (neighbouring workspaces), they received their tasks from their respective product owners, who occasionally had conflicting priorities. The integration of the two teams under one management was done to avoid the coordination overhead. When the teams were united, it had a positive side effect on the collaboration and interaction with other roles outside the teams. The cooperation between the teams improved and so did the collaboration with the product owners and program managers, who now represented a joint interest. Nonetheless, the teams did not associate these improvements with any significant changes in performance.

Second refactoring—Action to mitigate gaps in organizational capital: The main product architect initiated a second refactoring to improve the software architecture and further improve the performance. Unfortunately, this program was cancelled in the light of new organizational changes, i.e., a transfer of the sub-system to India, which was a strategic management decision not announced to the development level beforehand. As one manager explained the reason for the cancellation: “We also started another refactoring that was cancelled because when we heard news that we are transferring we realized that it is better to do the refactoring on the receiving side than doing a lot of changes and stopping half way and handling over half of the work and letting them continue”. This meant that the original developers, who had the human capital to raise the level of the organizational capital and make it strong, did not manage to implement the needed improvements before it was too late. Notably, the teams that received the further evolution of the sub-system were not ready to perform a major refactoring, and thus the improvement program was delayed for several years.

In Table 6, the evolution of product development is illustrated through different events and actions (column 1), changes brought about on the ICCs (column 3), developers’ reliance on different ICCs in the light of the events and actions (column 4) and perceived performance (column 5). Additionally, we comment on the intentions of the organization in relation to the events (column 2).

Table 7. IC profile of the Indian teams after the transfer.

ICC | Categories | Evaluation | Reliance on ICCs | Performance
Human capital | Skills and knowledge | Medium | Organizational capital | 2
Human capital | Creativity | Medium | |
Social capital | Teamwork skills | Strong | |
Social capital | External relations | Medium | |
Organizational capital | Software | Medium | |
Organizational capital | Documentation | Weak | |
Organizational capital | Organizational culture | Strong | |
Organizational capital | General infrastructure | Medium | |
Table 8. Mitigating actions after the transfer.

Actions | Comments | Affected ICCs and categories | Reliance on ICCs | Performance changes
Assignment of less complex tasks | Action to mitigate task difficulty | No change | Organizational capital | 2 → 3
Further investment into documentation | Action to mitigate gaps in organizational capital | + OC: Product documentation; + OC: Process documentation | Social capital | No change
Gain in working experience | Growth of human capital over time | + HC: Domain knowledge; + HC: Software product knowledge; + HC: Creativity | Social capital | 3 → 4

Notations used for effect illustration: a small increase: +, a large increase: ++, a small decrease: –, a large decrease: ––.

Transfer: Due to a shortage of resources at the Swedish site and new upcoming projects, it was decided to transfer the sub-system to India. A transfer means a relocation of the sub-system from the original developers, i.e., the two Swedish teams, to new developers, i.e., two Indian teams. After a transfer, new people are working on the product, and hence the levels of the human and social capital components will change. Thus a transfer means that the intellectual capital profile needs to be updated based on the intellectual capital of the new developers responsible for the software. The managers anticipated a decrease in the human capital as a result of the transfer, and took preventive actions to compensate for the unavoidable gaps. One developer explained why they started cleaning up the code: “Since we knew about the transfer we tried to clean up these things what we are working with as good as we can. So, it would be easier for others to understand.” Furthermore, it was decided to transfer the sub-system to an Indian site within the same company, which already had experience and expertise within the product domain (similar products) and with transfers. Employing developers who had worked on related products meant that they had important domain knowledge, which ensured a certain level of human capital. Their joint work experience ensured the ability to leverage the social capital in terms of teamwork skills. Some of the Swedes also already knew people on the receiving side in India, which had a positive impact on the social capital (seen from India). As a system manager explained: “In this case [a member of a team] knew some of the people already. He had transferred a product to the same site before. … We knew each other. At least on the top level”.

Even though preventive actions were planned, the performance decreased. As the architect explained: “then a lot of new guys came in and they were not doing any work at the start”. To be able to improve fast, the new developers were also supported by improved documentation covering the product and the processes. While the Swedish developers had not depended on the documentation, and thus many documents were outdated, the transfer meant that the organizational capital in terms of the software and documentation would become crucial as the prime source of reliance for the new developers. However, investments in documentation, although important, will never be enough to avoid a decrease in performance. As the architect explained: “Some of the changes they [Indian developers] made affected [a specific feature], for example, and they spent a lot of time fixing [that feature] before they could commit the changes. That of course, is a mistake you make when you are new to the product. You do not really know what you can do and what you cannot do without affecting, for example, performance [of the system]. That comes with the experience. This is something that you really cannot document; you have to get to know the product and how to do the stuff”.

Finally, to ensure the necessary access to expertise, a few Swedish developers were partially devoted to supporting the new Indian teams, and the main product architect was relocated to India for half a year of onsite support to answer questions and act as a safety net for the new teams. While this did not raise the human capital in the two Indian teams, it ensured leverage on the social capital through the availability of experts, which helped keep up the level of performance. One Swedish expert explained how he helped solve problems and tried to strengthen the organizational capital: “They [Indian developers] had some issues […] then I stepped in and helped. Otherwise, I was talking about the next step: what could you do to improve the product and giving them tips for, for example, refactoring”.

As a result of having Swedes available for the Indian teams, the new development unit, consisting of two teams in India, has the IC profile described in Table 7. Since the two Indian teams received existing software for further development, they had no product knowledge and only medium domain knowledge. Hence the teams primarily relied on the organizational capital. During the first months after the transfer, the new site climbed the learning curve, building their understanding and knowledge of the product (human capital). Interestingly, despite the improvements, the Indian developers perceived the parts of the organizational capital related to documentation to be weak and not sufficient to rely on when performing the development work. The transfer evidently resulted in deficiencies in comparison with what had been achieved by the Swedish teams, and the resulting performance was at level 2 again.

Several mitigating actions and improvements raised the performance of the new development unit after the transfer (see Table 8).

Assignment of less complex tasks—Action to mitigate task difficulty: To alleviate the problems with performance, the Indian developers were assigned less difficult and less critical tasks and more minor product improvements while climbing the learning curve. The Indian teams reflected that this improved their performance to level 3. However, the tasks still required some effort and, occasionally, major problems occurred.

Further investment into documentation—Action to mitigate gaps in organizational capital: The deficiencies in documentation were targeted by a continuous improvement program, which facilitated the learning too. The documentation was perceived to have improved from weak to a medium level. However, this did not have any impact on performance; therefore, further improvements were planned. As an Indian developer explained: “Slowly, slowly we improved the documents. New documents were created too. – Now it is OK, but we need to improve more.”

Gain in working experience—Growth of human capital over time: As the product and domain knowledge grew, the tasks could be handled with less effort. An Indian designer commented in the group interview: “It is still the same teams. Not a single person has left. The work is interesting. We are growing, we rotate people in different tasks, we are able to balance the workload, and we all sit together”. The Indian teams said that they had gained creativity, and performance improved to level 4, where the simple tasks were handled without any major problems.

4.4. Discussion of the illustrative case study

The ICCs and their assessment scales were defined for the case study, as well as the scales for assessing performance. In practice, it was observed that the case study subjects could easily understand the concepts, and that all ICCs were relevant to them. Furthermore, the performance evaluation did not cause any difficulty. However, the assessment of the ICCs as strong, medium or weak, and of the strength of the impact of certain events on the intellectual capital profile, was subject to many questions and disagreements. Thus, how to actually assess performance should be further evaluated, in particular whether it is possible to measure the actual outcome and not only the perception of the participants.

In the illustrative case above, several events and actions were studied. A transfer that had a profound negative impact on the intellectual capital profile led to a decrease in performance. The case study illustrates how changes in different ICCs change the performance. Notably, some of the mitigating actions implemented to increase the performance through improvements of different aspects of the ICCs did not increase the performance. This means that not all positive or negative changes in the level of intellectual capital have an immediate impact on performance, and that the changes have to be substantial. Notably, the case study also illustrates that the theory has its limitations. There is no mathematical way of calculating the value of each ICC and expressing their combination quantitatively, and the theory cannot clearly explain exactly why certain changes in one or several ICCs are sufficient to improve performance while others are not. However, the theory helps to explain and reason about performance after major events (the transfer and changes in task difficulty). It is noteworthy that the theory is formulated at a general level, which limits its predictive capabilities. More fine-grained theories and models are needed to be able to make predictions based on actual changes made in a specific context.

Due to the inability to make accurate predictions, the theory is limited in terms of exactness, but it helps in explaining and better understanding the relationships between different key components in the engineering of software. In other words, the changes to ICCs and task difficulty in different contexts should be carefully judged; in the long term, this will help improve the predictive power, although the predictive power of the theory will remain highly context-dependent.

5. Discussion

It may be observed that the three components of intellectual capital relate to education and research in software engineering, as well as to organization-specific aspects that cannot be taught directly at the university. In summary, in software engineering, education is primarily focused on the human capital, software engineering research is primarily aimed at the organizational capital, and the social capital has to be gained through the interaction of individuals and is to a large extent influenced by the context in which the individuals develop their professional career.

5.1. Human capital

The main focus of most university education is to increase the human capital of the students and, hopefully, to make students aware of the need for social capital. The human capital is built through courses at the university as well as through the lifelong learning of individuals as they pursue a career and obtain different experiences and expertise. Through education and lifelong learning, humans increase their general experience, knowledge and competence. Specific knowledge, for example related to a specific domain (such as telecom or process automation), product, system or service, should to a large extent be acquired at the workplace.

5.2. Social capital

The social capital is not necessarily taught directly at universities, although students often implicitly become well aware of the need to have good contacts with fellow students and the faculty. Most students leverage their contact network throughout their studies, for example by knowing which fellow student to discuss certain courses with, and so forth. Furthermore, the social capital is a natural part of project- and team-oriented learning, which is suggested as a complementary responsibility for an educational curriculum. This should make students understand the importance of, and the need for, social capital when developing software. However, the social capital is very much context-dependent, and hence the social capital is primarily built at the current workplace. From an educational point of view, the key with respect to social capital is to make students aware of its importance, while their actual social capital will be highly dependent on their future workplace. Notably, training mechanisms exist to improve the social capital of development teams at work. Certain development approaches (such as agile software development) and development practices (such as pair programming, daily meetings and review meetings) foster frequent networking and extensive interaction inside the development teams. Furthermore, communities of practice and participation in different forums foster networking across development teams and units. Organizations that have gaps in human or organizational capital should consider investments in social capital, which can become a source of competitive advantage.

5.3. Organizational capital

The organizational capital is largely addressed by software engineering research, i.e., research targets providing better processes, methods, techniques and tools to support software development. However, the formulated theory implies that software engineering research should take both human and social aspects into account. An example of the former is the need for different types of empirical studies of new ideas emerging from research. For example, new tools developed by a PhD student should be properly evaluated by humans and not only proposed, i.e., new tools ought to become part of the human capital and not only a potential organizational capital. Otherwise there is a risk that tools developed as part of research projects end up on the shelf. Thus, the research complements the educational responsibility of a university in a natural way.

6. Conclusions

From empirical observations in industry, as described in Section 3.1, it is concluded that industry does balance the components of intellectual capital, i.e. the human, social and organizational capitals respectively. In practice, frequent changes such as restructuring, retirements and transfers, as well as technical product evolution, continuously challenge companies’ abilities to reach their development objectives and performance. This article packages the observations from industry into a general theory for software engineering. The theory captures the technical aspects of software development through the concept of organizational capital. It acknowledges that software engineering is a human- and knowledge-intensive discipline by including human capital. Furthermore, challenges related to the scalability and complexity of software systems make it impossible for a single individual to handle a system of any reasonable size. Development of these systems requires a combination of expertise and experience, and hence interactions between individuals. This is captured in the theory through the inclusion of social capital. The theory could be used by industry to reason about different options for having sufficient intellectual capital in a given situation, and by researchers to position their work in a larger context, i.e. the ICCs. Given the general nature of the theory and the diversity under which software is developed, the theory as such is not aimed at predicting outcomes based on changes in any of the three ICCs. Thus, more fine-grained theories and models are needed to obtain a predictive capability. The proposed theory is focused on understanding, explaining and reasoning about the relationships between human, social and organizational capitals.

It should be noted that the theory emphasizes the importance of intellectual capital in software engineering. It helps to show that staffing projects is not a straightforward task, and not only a matter of ensuring individual skills. It is a matter of the relationship between the task (its nature and difficulty) and the balance and dynamics between the three ICCs.

The general theory is formulated as a balancing of the three ICCs: the human, social and organizational capitals respectively. The theory is firmly based in industry practice, and constructs and propositions have been formulated to structure and systematize the balancing that is often handled implicitly in industry. The theory helps by providing explanatory power for the observations in industry, and it may also serve as a general tool for reasoning about the consequences of changing the intellectual capital profile.

Further research is needed; in particular, others should test the theory’s usability in settings other than those available to the authors of this article, and find ways to evaluate the theoretical constructs, and hence the theory as a whole. Thus, the further operationalization of the theory still remains. Furthermore, the theory points to the need for software engineering research and education to preferably take all three components of intellectual capital into consideration, both when developing and evaluating new solutions and when teaching software engineering.

Acknowledgments

We are thankful to Dag Sjøberg for his useful advice that supported us in describing the formulation of the theory. The Knowledge Foundation, Sweden, funds the research through the TEDD (Technical Excellence in Distributed Development) Project (grant no. 20120200). The research is also supported by the Smiglo project, which is partially funded by the Research Council of Norway under grant 235359/O30.

Vitae

Claes Wohlin received the Ph.D. degree in communication systems from Lund University in 1991. Currently, he is a professor of software engineering and dean of the Faculty of Computing at Blekinge Institute of Technology, Sweden. He has previously held professor chairs at the universities in Lund and Linköping. His research interests include empirical methods in software engineering, software process improvement, software quality, and global software engineering. He was the recipient of Telenor’s Nordic Research Prize in 2004, and he has been a member of the Royal Swedish Academy of Engineering Sciences since 2011. He is editor-in-chief of Information and Software Technology, published by Elsevier.

Darja Šmite received her Ph.D. degree in computer science in 2007 from the University of Latvia. Currently, she is an associate professor of software engineering at Blekinge Institute of Technology in Sweden, where she leads the research efforts related to the effects of offshoring for Swedish software-intensive companies. She is also a visiting professor at University of Latvia. Her research interests include global software engineering, large-scale agile software development, and software process improvement.

Nils Brede Moe works with software process improvement, agile software development and global software development as a senior scientist at SINTEF Information and Communication Technology. His research interests relate to organizational, socio-technical, and global/distributed aspects. His main publications include several longitudinal studies on self-management, decision-making and teamwork. He wrote his thesis for the degree of Doctor Philosophiae on “From Improving Processes to Improving Practice — Software Process Improvement in Transition from Plan-driven to Change-driven Development”. He also holds an adjunct position at Blekinge Institute of Technology.

[Ref] The influence of developer multi-homing on competition between software ecosystems

Authors: Sami Hyrynsalmi, Arho Suominen, Matti Mäntymäki

Article Ref: doi:10.1016/j.jss.2015.08.053

Abstract

Having a large number of applications in the marketplace is considered a critical success factor for software ecosystems. The number of applications has been claimed to determine which ecosystem holds the greatest competitive advantage and will eventually dominate the market. This paper investigates the influence of developer multi-homing (i.e., participating in more than one ecosystem) in three leading mobile application ecosystems. Our results show that, when regarded as a whole, mobile application ecosystems are single-homing markets. The results further show that 3% of all developers generate more than 80% of installed applications and that multi-homing is common among these developers. Finally, we demonstrate that the most installed content actually comprises only a small number of the potential value propositions. The results thus imply that attracting and maintaining developers of superstar applications is more critical for the survival of a mobile application ecosystem than the overall number of developers and applications. Hence, the mobile application ecosystem market is unlikely to become a monopoly. Since exclusive contracts between application developers and mobile application ecosystems are rare, multi-homing is a viable component of risk management and a publishing strategy. The study advances the theoretical understanding of the influence of multi-homing on competition in software ecosystems.

Keywords

  • Software ecosystem;
  • Multi-homing;
  • Two-sided markets

1. Introduction

Competition in the mobile communication industry has been argued to be turning from “a battle of devices to a war of ecosystems”.1 Hence, the sheer number of applications in the marketplace has become increasingly important in marketing new mobile devices (see e.g., Chen, 2010, Reuters, 2012, Lee, 2015 and Smith, 2015). All leading mobile operating system providers have established application marketplaces, such as Google Play, the App Store by Apple and Microsoft’s Windows Phone Store (previously Windows Phone Marketplace), with the aim of enticing a large number of content providers (e.g., application developers) in order to create their mobile application ecosystems. The logic behind establishing the ecosystems is grounded in the theory of network externalities (Katz and Shapiro, 1985). Due to network externalities, a large number of application developers within the ecosystem is expected to lead to a large number of applications that, in turn, will attract customers and drive device sales, leading to a virtuous circle (Holzer and Ondrus, 2011).

In this study, the concept of ‘mobile application ecosystem’ refers to “an interconnected system comprising an ecosystem orchestrator, mobile application developers, and mobile device owners, all of whom are connected through a marketplace platform” (Hyrynsalmi, Seppänen and Suominen, 2014). Hence, a mobile application ecosystem is a derivative of the more general concept of a ‘software ecosystem’ (Jansen et al., 2009, Bosch, 2009 and Manikas and Hansen, 2013).

The emergence of ecosystems has increased the complexity of revenue models, but also cooperation, competition and co-opetition between and within the ecosystems. The traditional value chain approaches (Porter and Millar, 1985 and Porter, 2004), employed to describe the telecommunications industry (Barnes, 2002, Maitland et al., 2002 and Funk, 2009), have been increasingly replaced by ecosystem approaches (Basole and Karla, 2011, Basole and Karla, 2012 and Basole et al., 2012).

The increased complexity calls for a better understanding of the boundaries and structures of the ecosystems (e.g., Jansen et al., 2009, Gueguen and Isckia, 2011 and Hanssen, 2012). Prior research has investigated the success factors of the iPhone (Laugesen and Yuan, 2010 and West and Mace, 2010), the distribution and capture of value in the mobile phone supply chains (Dedrick, Kraemer, and Linden, 2011), developers’ perspectives on the mobile application markets (Lee et al., 2010, Holzer and Ondrus, 2011 and Schultz et al., 2011), the dynamics of the application marketplaces (Järvi and Kortelainen, 2011, Hyrynsalmi et al., 2013 and Jansen and Bloemendal, 2013), standard wars and platform battles (Heinrich, 2014, Gallager, 2012, van de Kaa and de Vries, 2015 and van de Kaa et al., 2011) and cooperation within ecosystems (Gueguen and Isckia, 2011). However, there is a dearth of theoretically grounded literature offering foresight on the competition between software ecosystems that could guide mobile application developers to optimize their publishing strategies.

To fill this void in the literature, this study draws on theory about platform competition (Rochet and Tirole, 2003, Armstrong, 2006 and Sun and Tse, 2009) and investigates application developers’ multi-homing (i.e., the situation in which developers publish applications in two or more ecosystems) as well as the content of the most downloaded applications. According to the extant research, the success of network platforms, such as mobile application ecosystems, is determined by whether the market is single-homing or multi-homing in terms of volume (Sun and Tse, 2009). In other words, if application developers prefer to offer their products in one ecosystem, i.e. single-home, the market as a whole will, over time, develop into a monopoly of the leading ecosystem.

To gain a more accurate insight into the competition between software ecosystems, we advance the research on the influence of multi-homing on platform competition in two-sided markets (Sun and Tse, 2009). Software ecosystems are two-sided markets since two groups of agents, e.g. consumers and application developers, operate in the market. Second, we contribute to the research on competition dynamics in the telecommunications industry (He, Lim, and Wong, 2006). Our point of departure is that only a small number of all applications available in the ecosystems are actually actively used by customers, and consequently only a small number of all developers generate the majority of downloads. Thus, we particularly focus on the role of this group of developers, which we define as ‘nucleus developers’ since they have a central role in the success of an ecosystem. Hence, we shed light on the bargaining power of the nucleus developers and of ecosystem orchestrators such as Apple, Google, and Microsoft that host and maintain the ecosystems (Manikas and Hansen, 2013).

Against this backdrop, we empirically study more than one million applications from all three mobile application ecosystems, examining the level of multi-homing at the level of (1) the mobile application ecosystem and (2) the nucleus developer. We use web crawling to collect the data, and string matching algorithms to pair applications and developers across the different ecosystems. We then move on to examining how the dynamics of multi-homing change by analyzing the nucleus developers to determine whether they are particularly prone to multi-homing and, thus, less dependent on a single ecosystem orchestrator. Finally, we investigate the content of the most successful applications and show that the content, i.e., the value propositions of the most popular applications, can be classified into a relatively small number of categories.

Our results demonstrate that just three percent of the developers generate more than 80% of all installed applications. In addition, the results show that, when regarded as a whole, only a small subset of application developers multi-home. However, among the nucleus developers, multi-homing is common. This indicates that mobile application ecosystems can be considered both a single-homing and a multi-homing market, depending on the level of analysis. We term markets like these ‘multilevel two-sided markets’. Taken as a whole, our results offer an explanation as to why several competing mobile application ecosystems can co-exist. For professional application developers, who have the resources to publish their applications in multiple ecosystems, this study implies that multi-homing is a viable component of risk management and a publishing strategy.

The remainder of the study is structured as follows. After this introductory section, we present the theoretical foundation of the study. The third section includes the methodology and data collection process. The results are presented in the fourth section. The fifth section comprises discussion on the results, implications for research and practice, and also limitations and avenues for further inquiry. The last section concludes the study.

2. Background

The number of application developers in mobile application ecosystems generally increases the number of applications available in the marketplace and, hence, the value of the ecosystem to the customer, and vice versa (Holzer and Ondrus, 2011 and Cenamor et al., 2013). Therefore, it is paramount for ecosystem orchestrators to involve both customers and developers in their respective ecosystems. Thus, the success of an ecosystem is dependent on both developers and customers. As a result, mobile application ecosystems can be termed ‘two-sided markets’ (Rochet and Tirole, 2003 and Armstrong, 2006).

Two-sided markets are economic platforms with beneficial cross-group network effects (Armstrong, 2006, Rochet and Tirole, 2003 and Parker and Van Alstyne, 2005). In other words, the value of participating in a platform for agents in one group depends on the number of participants in the other group. Network effects can accrue from direct externalities, whereby utility increases as the number of users consuming the same good increases, and from indirect externalities, whereby the demand for a product depends on the existence of another product (Katz and Shapiro, 1985). Hence, in the mobile application ecosystem context, two-sided markets can be conceptualized as markets where one or several economic platforms enable interaction between customers, developers, and an orchestrator (Rochet and Tirole, 2003, Rochet and Tirole, 2006 and Armstrong, 2006).
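For readers who prefer a formula, the cross-group effect can be written in the stylized membership-utility form common in the two-sided-market literature (along the lines of Armstrong, 2006); the notation below is introduced here for illustration only and is not taken from this article.

```latex
% Stylized utilities in a two-sided market: group 1 = customers, group 2 = developers.
% Notation introduced here for illustration (cf. Armstrong, 2006).
u_1 = \alpha_1 n_2 - p_1, \qquad u_2 = \alpha_2 n_1 - p_2
% u_i : utility of joining the platform for an agent in group i
% n_j : number of participants on the other side of the platform
% \alpha_i : strength of the cross-group network effect enjoyed by group i
% p_i : membership (or usage) fee charged to group i
```

In this stylized reading, multi-homing simply means that an agent collects such membership utilities, and bears the corresponding costs, on more than one platform at a time.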

To date, the managerial and scholarly debate on two-sided markets has followed the logic of the credit card business, where the absolute number of merchants accepting a credit card — or the number of applications available in the marketplace — determines the value of the credit card for the end user (see e.g., Chen, 2010, Reuters, 2012, Lee, 2015 and Smith, 2015). However, this approach considers all applications equal and thus ignores the qualitative aspects of the market dynamics. Prior studies have examined winner-takes-all competition (Eisenmann, Parker and Van Alstyne, 2006), i.e., a situation where one platform ultimately wins the platform race. Econometric modeling studies, such as Tse (2006) and Sun and Tse (2009), have created models of platform competition that emphasize the role of single- or multi-homing.

As there are several competing mobile application ecosystems, customers and developers can participate in more than one ecosystem. Participation in more than one economic platform at a time is termed ‘multi-homing’ (Rochet and Tirole, 2003, Armstrong, 2006 and Sun and Tse, 2009). Multi-homing in two-sided markets is a situation where more than one two-sided platform exists in the same market, and the two sides of the market (e.g., buyers and sellers) are free to operate on several platforms. For example, an application developer is multi-homing when it offers products in both the Apple App Store and Google Play. Similarly, a customer is multi-homing when he or she utilizes several mobile devices running different platforms; with a single mobile device, however, the customer can typically participate in only one ecosystem. Single-homing is the opposite situation: an actor participates in only one ecosystem.

Sellers engage in multi-homing to gain access to larger potential markets (Rochet and Tirole, 2006), to offer their products to the same customers across different platforms, and to reduce dependency on a single market and orchestrator (Idu, van de Zande, and Jansen, 2011). However, multi-homing also generates costs associated with converting a product to different platforms, additional marketing efforts, and also maintaining the product for several platforms (Eisenmann et al., 2006).

Prior research has focused on software vendors’ multi-homing in console games marketplaces (Landsman and Stremersch, 2011), Software as a Service (SaaS) marketplaces (Burkard et al., 2011 and Burkard et al., 2012), and also within Apple’s ecosystem (Idu et al., 2011). In their study on the gaming console market, Landsman and Stremersch (2011) found that the multi-homing of games has a negative effect on sales at the marketplace level, although the negative effect decreases when a platform matures or gains market share. Idu et al. (2011) investigated the iPhone, iPad, and Mac software marketplaces, and found that, out of the top 1,800 applications, 17.2% were multi-homed in two marketplaces and 2.1% in all three marketplaces.

In their theoretical analysis of competitive advantage in two-sided markets, Sun and Tse (2009) highlighted the importance of the distinction between multi-homing and single-homing in determining the winner among competing platforms. Drawing on dynamic systems models, Sun and Tse (2009) argued that, in the context of single-homing, only the largest network will survive and that network size is the critical factor in determining the winner among competing platforms. This is due to the fact that in a two-sided market, network participants become a critical resource for the platform orchestrator (Sun and Tse, 2009). By drawing on two dynamic systems models, Sun and Tse (2009) concluded that a multi-homing market is able to sustain several platforms, whereas a single-homing market is prone to becoming dominated by a single platform.

However, Sun and Tse (2009) pointed out that their analysis of platform competition focused on the quantity of network participants but did not address the quality of participants. This issue is particularly important in the context of mobile application ecosystems, since most of the installations in Google Play were generated from a small set of applications (Hyrynsalmi, Suominen, Mäkilä, and Knuutila 2012).

In the following section we pay special attention to this subset of applications as well as their developers. In doing so, this study moves beyond a volume driven analysis of ecosystem competition to analyze the value propositions made within different platforms and to uncover single- or multi-homing patterns at a value proposition level rather than a developer level.

3. Research process

We collected two datasets for this study: (1) data on over a million applications available in the marketplaces of the three major mobile application ecosystems (described in Section 3.1); and (2) the most popular applications in these marketplaces (Section 3.3). The first dataset is used to study the overall multi-homing rates in the market, while the second gives insight into the top applications and their developers. We analyzed the empirical data in three stages. First, we identified the multi-homing of all applications and developers from the three marketplaces (Section 3.2). In the second stage, we analyzed the multi-homing patterns of the nucleus developers (Section 3.3). In the third stage, we conducted a content analysis of the nucleus developers’ applications to determine whether they could be classified into qualitatively similar content categories.

3.1. Application data collection

In total, empirical data were collected on 1,295,320 applications from the three ecosystems, Google Play, Apple’s App Store, and Windows Phone Store, during December 2012 (Windows Phone Store) and January 2013 (Google Play and Apple App Store). Our data shows that Apple’s App Store had 654,759 applications made by 149,032 developers, Google Play had 542,955 applications made by 88,144 developers, and Windows Phone Store had 94,606 applications made by 25,833 developers.

We employed a web crawler (see e.g., Castillo, 2004 and Olston and Najork, 2010) implemented in the Python programming language to gather the application data. The script started from the front page of each marketplace and went through all of the listed pages. It stored every application identifier available on each web page into a queue of applications to be studied, and duplicate values were removed from the queue. The program also collected various attributes for each identified application from its public profile in the marketplace and stored them in a database. Although the available information varied between marketplaces, as a minimum the name, developer, and price of each application were captured.
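The collection logic can be illustrated with a minimal Python sketch. The marketplace URL, page markup, and identifier pattern below are hypothetical placeholders, not the endpoints of the actual marketplaces; the sketch only shows the queue-and-deduplicate approach described above.

```python
# Minimal sketch of the crawling approach described above (hypothetical URL and markup).
import re
from collections import deque

import requests

START_URL = "https://marketplace.example.com/apps"        # placeholder, not a real endpoint
APP_LINK = re.compile(r'href="/app/([A-Za-z0-9.\-]+)"')    # hypothetical identifier pattern

def collect_app_ids(start_url: str, max_pages: int = 100) -> set:
    """Walk the listing pages breadth-first and store unique application identifiers."""
    queue, seen_pages, app_ids = deque([start_url]), set(), set()
    while queue and len(seen_pages) < max_pages:
        url = queue.popleft()
        if url in seen_pages:
            continue                                       # duplicates are dropped from the queue
        seen_pages.add(url)
        html = requests.get(url, timeout=30).text
        app_ids.update(APP_LINK.findall(html))
        # The real crawler would also enqueue links to further listing pages here
        # and store each application's name, developer, and price in a database.
    return app_ids
```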

3.2. Detecting multi-homing

Following Landsman and Stremersch (2011), we investigated multi-homing at two levels: seller-level and platform-level multi-homing. Seller-level multi-homing is a situation where a particular seller (i.e., developer) offers its products to customers in more than one ecosystem. Platform-level multi-homing takes place when a particular application is available in several ecosystems. A developer can publish different products in different ecosystems, and a particular product can be published on different platforms by different developers. The latter is quite common in, e.g., video game markets, where the porting of a popular video game from one console to another is carried out by a different game studio. There are a few similar instances in mobile application ecosystems. For example, Microsoft Corporation is the publisher of the Facebook application in the Windows Phone ecosystem and Research in Motion Limited is the publisher of the Facebook application in the Blackberry World (previously Blackberry App World) marketplace, which shows that these front-end applications were implemented by parties other than Facebook, in these cases the ecosystem orchestrators themselves. In the Android and iOS ecosystems, the Facebook applications are published by Facebook and Facebook Inc, respectively.

We implemented a set of Python scripts to identify multi-homing developers (i.e., seller-level multi-homing) and applications (i.e., platform-level multi-homing). We utilized two matching strategies, namely exact matching and approximate matching. Exact matching requires that the two names under comparison are identical character by character, although the comparison is case-insensitive. Approximate matching allows a particular level of dissimilarity between the names under comparison. This strategy is useful in situations where, for example, a developer has used a pre- or postfix such as ‘Inc.’ or ‘GmbH’ in one marketplace but omitted it in another. For example, ‘Rovio’ is the publisher of Angry Birds in Windows Phone Store, while ‘Rovio Mobile Ltd.’ is the publisher in Google Play and the Apple App Store. We used Levenshtein distance (Levenshtein, 1966) to measure the similarity of two names and employed Python’s difflib library for the comparisons.

We decided to utilize these two matching strategies because exact matching gives a lower bound for the total number of multi-homing cases but misses some cases, as discussed above. Approximate matching, in turn, detects these cases but can also create false positive matches. We iterated over different similarity thresholds in the approximate matching process until visual examination showed a clear increase in false positive matches; for each iteration, we randomly selected a dozen matched pairs and checked whether they were correct. As a result, approximate matching offers an upper bound for the number of multi-homed applications, and the actual number of multi-homers falls within this interval.
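A minimal sketch of the two matching strategies is shown below; it uses difflib’s SequenceMatcher, as in the study, while the developer names and the printed scores are only illustrative of why a looser threshold catches more multi-homers.

```python
# Sketch of the exact and approximate name matching described above.
from difflib import SequenceMatcher

def exact_match(a: str, b: str) -> bool:
    # Character-by-character comparison, case-insensitive.
    return a.strip().lower() == b.strip().lower()

def similarity(a: str, b: str) -> float:
    # SequenceMatcher returns a similarity score in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def approx_match(a: str, b: str, threshold: float) -> bool:
    return similarity(a, b) >= threshold

print(exact_match("Rovio", "Rovio Mobile Ltd."))            # False: exact matching misses the pair
print(round(similarity("Rovio", "Rovio Mobile Ltd."), 2))   # ~0.45: only a loose threshold matches it
```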

Table 1. Platform-level multi-homing in three application ecosystems.

Matching strategy | In all three ecosystems | Apple App Store & Windows Phone Store | Google Play & Windows Phone Store | Google Play & Apple App Store | Share of multi-homed applications
Exact | 430 | 1,092 | 1,886 | 16,578 | 1.7%
Approx. (85%) | 531 | 1,348 | 2,184 | 21,153 | 2.1%
Approx. (75%) | 645 | 1,586 | 2,464 | 24,598 | 2.5%
Approx. (50%) | 1,453 | 2,698 | 3,963 | 30,930 | 3.2%

3.3. Data collection of ecosystem nucleuses

As pointed out by Hyrynsalmi et al. (2012), most of the installations in Google Play were generated by a small set of applications. Therefore, we pay special attention to this subset of applications, also referred to as ‘superstars’ (Landsman and Stremersch, 2011), and we consider developers of these superstar applications as the ‘nucleus developers’ of the respective ecosystems. It should be noted that while a keystone actor, i.e. “an active leader in the ecosystem” (Basole, 2009), can also be a nucleus developer, the opposite is seldom the case.

To examine the nucleus developers’ role in the ecosystems, we analyzed the cumulative share of all application downloads in the Google Play marketplace accounted for by the top 3,000 developers.

We calculated the number of installations from the application dataset, taken from the web crawling of Google Play, using estimated lower bounds, upper bounds, and median values. That is, when an application’s installation category is ‘5-10’, the values 5, 10, and 7 were used as the respective installation counts. Fig. 1 below clearly shows that the top 3,000 developers (3.3% of all developers) generate the majority (i.e., 85.0 to 85.6%) of all installations in the marketplace. As can be seen from the figure, this finding holds for all three estimation methods. Furthermore, the top 25 developers alone account for approximately one-fifth of all downloads in the marketplace.
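The estimation logic can be sketched as follows. The installation categories and developer names are illustrative, not actual Google Play data; the integer midpoint reproduces the value 7 used above for the ‘5-10’ category.

```python
# Sketch of the lower-bound, midpoint, and upper-bound installation estimates.
from collections import defaultdict

apps = [  # (developer, installation category as reported by the marketplace) -- illustrative
    ("Dev A", "1000000-5000000"),
    ("Dev A", "100000-500000"),
    ("Dev B", "5-10"),
]

def estimates(category: str):
    lo, hi = (int(x) for x in category.split("-"))
    return lo, (lo + hi) // 2, hi          # lower bound, midpoint (7 for "5-10"), upper bound

totals = defaultdict(lambda: [0, 0, 0])
for developer, category in apps:
    for i, value in enumerate(estimates(category)):
        totals[developer][i] += value

grand = [sum(t[i] for t in totals.values()) for i in range(3)]
for developer, (lo, mid, hi) in sorted(totals.items(), key=lambda kv: -kv[1][1]):
    shares = [round(100 * v / g, 1) for v, g in zip((lo, mid, hi), grand)]
    print(developer, shares)               # developer's share of all installations, in %, per estimate
```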

Fig 1
Fig. 1.

Cumulative percentages of application installations in Google Play for the top 3,000 developers (3.3% of all developers) with three estimation methods (lower bound, median of bounds, and upper bound).

As drawing an exact line between the superstars and other highly successful applications is problematic, and because not all marketplaces publish information on installation or download counts, we decided to use the top 100 application listings as a proxy that gives a relatively good estimate of the superstar applications. The marketplaces freely publish different top application listings on their web pages. For example, all marketplaces offer a list of the most installed free applications.

We examined the top 100 application listings of free and paid applications for each marketplace. Furthermore, for Google Play and the Apple App Store, we also examined the top grossing listings, which, in addition to revenue earned from direct sales, also include revenues earned from in-application payments. We collected information on 622 unique applications from the overall top 800 applications, and then manually determined whether the producers of these applications were present in several ecosystems. In concrete terms, we manually investigated the profile information of these applications in the marketplaces, the developers’ web pages, press releases, and newspaper and magazine articles about the developer to detect whether the developer was multi-homing.

4. Results

This section presents the findings of the study. First, we show the overall level of multi-homing among all applications. Second, we focus on application developers and examine their multi-homing behavior. Third, we investigate the multi-homing rates of superstar applications and nucleus developers. Fourth, we analyze the content of these superstar applications and their value propositions.

4.1. Platform-level multi-homing

The results demonstrate that the share of multi-homed applications from the overall number of applications is small. Table 1 shows the results of platform-level multi-homing with exact and approximate matching strategies, and also the results of three similarity threshold values for approximate matching.

Although the number of multi-homed applications doubles when the similarity requirement is loosened, the overall share of multi-homed applications still varies from only 1.7 to 3.2% of all applications. Within a single ecosystem, the share of multi-homed applications varies only a little: 2.6 to 4.9% for Apple App Store, 2.6 to 5.3% for Windows Phone Store, and 3.3 to 6.2% for Google Play. Similarly, the share of applications available in all three ecosystems remains under 0.14% in all cases.

We tested different similarity requirement values and found that 50% similarity was the lowest without a clear increase in false positive matches. Nevertheless, the share of multi-homing applications remained low; for example, 3.5% of all unique applications with a threshold value of 40%. In sum, our results demonstrate that the multi-homing publishing strategy is only employed by a small number of developers and, typically, for only a small set of applications.

Google Play and Apple App Store host the majority of multi-homing applications, which is not surprising as the two ecosystems have larger volumes of applications and developers than Windows Phone Store. Interestingly, however, almost three times more developers have published in both Google Play and Windows Phone Store than in both Apple App Store and Windows Phone Store. This observation might be due to the different publication processes utilized by the orchestrators (Campbell and Ahmed, 2011 and Cuadrado and Dueñas, 2012), as Google Play and Windows Phone Store have more open acceptance processes than Apple App Store.

Table 2. Percentages of platform- and seller-level multi-homing in different top 100 listings in the three ecosystems. The top grossing listings include the applications with the highest in-application sales.

Ecosystem | Listing | Platform-level | N | Seller-level | N
Apple App Store | Free | 48.0% | 100 | 55.2% | 87
Apple App Store | Paid | 45.0% | 100 | 42.3% | 71
Apple App Store | Grossing | 50.0% | 100 | 62.0% | 71
Apple App Store | Total | 47.0% | 253 | 50.3% | 175
Google Play | Free | 55.0% | 100 | 60.0% | 80
Google Play | Paid | 43.0% | 100 | 42.0% | 81
Google Play | Grossing | 58.0% | 100 | 69.1% | 68
Google Play | Total | 50.9% | 271 | 51.6% | 192
Windows Phone Store | Free | 47.0% | 100 | 46.8% | 79
Windows Phone Store | Paid | 41.0% | 100 | 43.4% | 83
Windows Phone Store | Total | 43.4% | 196 | 43.7% | 151
N = number of applications and application developers in each listing. Windows Phone Store does not publish a top grossing listing.

4.2. Seller-level multi-homing

In brief, the results reveal that seller-level multi-homing is more common than platform-level multi-homing, although the degree of seller multi-homing is also small. Fig. 2 illustrates seller-level multi-homing in the three ecosystems with a Venn diagram based on the results of the exact matching method. With this matching strategy, we found 248,104 unique developers. Of these, 14,261 (i.e., 5.75%) had published in at least two marketplaces, and only 644 (i.e., 0.26%) had published in all three studied ecosystems. Within a single ecosystem, the share of multi-homers varies from 8.8% for the Apple App Store to 10.8% for Windows Phone Store and 15.0% for Google Play.

Fig 2
Fig. 2.

Venn diagram of seller-level multi-homing, based on the exact matching strategy, in three application ecosystems.

With the approximate matching method, a threshold of 95% raised the share of multi-homers to 7.2% and the share of developers active in all three ecosystems to 0.36%. Although threshold values below 95% uncovered new true positive matches, they also produced a considerable increase in visually observed false positives.

4.3. Nucleus developers, superstar applications, and multi-homing

To study the superstar applications, we examined eight top 100 application listings in the three ecosystems, from which we identified 622 unique applications. Of this group, a considerable share, 244 (i.e., 39.2%), were multi-homed. Table 2 presents the number of multi-homing applications for each studied top list in more detail. These superstar applications were published by 429 application developers, and the number of developers producing the content for each ecosystem is even smaller: the top 800 applications were published by 175 developers in Apple App Store, 194 in Google Play, and 152 in Windows Phone Store. Of these 429 developers, a significant number (n = 183; i.e., 42.7%) were multi-homing.

In addition, we studied the 100 developers that generated the most installations in the Google Play marketplace (i.e., the top one hundred developers from Fig. 1). Although only 47 of these appeared in the top applications’ developer list, 52 of the 100 developers were multi-homing in at least two mobile application ecosystems. While only Google Play offers these figures, the analysis suggests that the magnitude of multi-homing among top developers is similar across the three ecosystems. Finally, when compared to the overall rate of seller- and platform-level multi-homing, the shares for superstar applications and nucleus developers are considerably higher regardless of the approach employed.

In the third and final stage of the analysis, we investigated the content of the superstar applications. First, we examined the 622 applications and wrote a short description of each application and the specific functionality, i.e., the application’s value proposition, that it offers to the user. Thereafter, we grouped applications with common characteristics into categories; for example, ‘personalization,’ ‘games,’ and ‘instant messaging’ were defined as categories. This process of analyzing and coding is typical for the content analysis of textual data (see, e.g., Krippendorf, 2013). The results of the content analysis are presented in Table 3.

Table 3. Classification of superstar applications based on the content provided.

Category | N | Description | Example
Game | 367 | Classified as games by the developers | Clash of Clans
Photo and video editing | 36 | Offer different kinds of effects and editing options for videos and/or photos | Instagram
Personalization | 34 | Change user interface elements, e.g., backgrounds, ring tones | Superuser Elite
SNS front-end | 19 | Front-ends for social network services, e.g., Twitter, Facebook | Facebook
Music/video player | 18 | Applications that play music and/or videos | Spotify
Assistant, calendar, & notes | 16 | Small tools that help with everyday life, e.g., reminders, listing tools | MyCalendar
Mobile front-end for Internet content | 15 | Specific front-ends for web content such as Wikipedia | ESPN, SportsCenter Feed
Short message | 11 | For sending and receiving short messages | WeChat
Shopping front-end | 10 | Mobile-specific front-ends for e-shopping services | Ebay
VoIP service | 8 | Voice-over-IP applications | Skype
Flashlight | 8 | Flashlight applications | Flashflight Free
Office | 7 | Office-like applications | TurboScan
Maps | 6 | Offer different map services | Google Maps
Weight loss | 6 | Offer weight tracking and tips for weight loss | Weight Watchers Mobile
Sleep application | 6 | Play music that should help one to sleep | Sleep Bug Pro
Book-on-demand reader | 5 | Readers for book-on-demand services | iBooks
Voice recognition | 4 | Utilize voice recognition | SoundHound
Dictionary & translate | 4 | Dictionary and translation applications | Translator
Sport tracker | 4 | For tracking sport activities | Endomondo Sports Tracker Pro
Dating | 3 | Dating services | MeetMe
Search | 3 | Different search services | Google
Cloud storage | 3 | Enable saving and retrieving content from cloud services | Google Drive
Music making | 2 | For playing and recording musical instruments | GarageBand
Bank front-end | 2 | Front-ends for banking services | Bank of America
Backup | 2 | Enables storage and retrieval of phone data | My Backup Pro
Barcode reader | 2 | For reading barcodes | Barcode Scanner
AR applications | 2 | Augmented reality applications | Sky Map Free
Misc. | 19 | Applications that could not be merged with any other application to form a category | Accurate Tuner Pro
Total | 622 | |
N = number of occurrences.

In Table 3, the Game category is by far the largest, containing applications such as Angry Birds and Clash of Clans. Facebook is classified under Social Network Service (SNS) front-end and Facebook Messenger under Short message. Instagram falls in the second largest category, Photo and video editing, and Google Maps and Earth were classified under Maps. Altogether, the majority of applications were easily classified into the categories, and only 19 applications were placed in the miscellaneous category, which contains applications such as GasBuddy, Official eBay Android App, and Longman Dictionary.

The content analysis revealed that the majority of the superstar applications (i.e., 59%) were games of various kinds. In addition, there were some rather specific categories among the superstar applications: for example, the listings include eight different applications that turn a phone into a flashlight and six applications that play sounds intended to help users fall asleep. Interestingly, we did not observe any major differences between the three competing ecosystems, as the relative sizes of the categories are similar in each of them.

In summary, the majority of the most popular content, such as Facebook, weather applications, and the most popular games, is either offered by the original developers or imitated by other developers in all three ecosystems. That is, the levels of multi-homing among superstar applications and nucleus developers are rather high. This observation contrasts with prior studies suggesting that the level of multi-homing is, at most, small (Boudreau, 2007 and Boudreau, 2012). Altogether, our results have several implications for both research and practice, which are discussed in the following section.

5. Discussion

This section presents the key findings of the study. Thereafter, we compare our results against prior theory on multi-homing in two-sided markets (Sun and Tse, 2009) and against the often-stated argument that a large developer base and a large volume of complementary products are key to the success of an ecosystem (Cenamor et al., 2013). This is followed by a discussion, from a more practice-oriented perspective, of the effect of multi-homing on competition between software ecosystems. We conclude by discussing the limitations of the study and offering avenues for further research.

5.1. Key findings

We have condensed the results of the study into three key findings:

1. When looking at the market as a whole, mobile application ecosystems are single-homing markets.

2. However, when focusing only on the most downloaded applications we find that the mobile application ecosystems are multi-homing markets.

3. The value propositions of the superstar applications are relatively similar across the ecosystems.

First, our results imply that, when looking at the market for mobile applications as a whole, it is a single-homing market. As indicated by both our platform-level and seller-level analyses, only a small set of applications (i.e., 1.7 to 3.2%) and developers (i.e., 5.8 to 7.2%) are multi-homing.

Second, when looking at the most popular applications and the developers of these applications, the market is a multi-homing market. Our results indicate that the multi-homing rates among the most popular applications, that is, superstars (i.e., 39.2%), and their nucleus developers (i.e., 42.7%) are almost ten times those of all other applications and developers in the market.

Third, our content analysis of the superstar applications in all three mobile application ecosystems offers empirical insight into the value propositions in the mobile applications market. Our results show that superstars are largely basic applications such as flashlights and short message services. Furthermore, a considerable share of these applications are merely front-ends for services offered on the web, which implies that third parties can replicate the content of many superstar applications with relative ease. In addition, our analysis of superstar applications indicated that the actual set of nucleus developers appears to be rather small and mainly comprises game producers.

5.2. Theoretical implications

Our study advances the understanding of software ecosystems in several ways. First, our research offers novel insight into the influence of multi-homing on competition between ecosystems. Our findings indicate that the level of multi-homing differs considerably between the overall market and the superstar applications. According to Sun and Tse’s (2009) theory of platform competition, a multi-homing market can sustain several competing ecosystems, whereas a single-homing market eventually evolves toward one prevailing ecosystem. When examining multi-homing at the level of the entire content of the three ecosystems, our findings support Sun and Tse’s (2009) assertion that the market would evolve toward one dominant ecosystem.

At the same time, our results also indicate that multi-homing is much more common among superstar applications and nucleus developers. According to Sun and Tse (2009), this implies that the market would be able to sustain more than one ecosystem. Overall, our research advances Sun and Tse’s (2009) model of platform competition by emphasizing that multi-homing can manifest differently across groups of actors within a single market. However, further work is needed to understand multilevel two-sided markets in which multi-homing behavior and value for the ecosystem differ substantially between the small number of nucleus developers and the vast majority of developers.

Second, our findings demonstrate that the quality of application developers matters far more than their number. As our results show, only around 3% of application developers are responsible for more than 80% of installations in a single ecosystem. Hence, after a certain critical threshold is reached, the quality of developers is far more important for generating downloads from the marketplace. Therefore, from the vantage point of a mobile application ecosystem orchestrator, attracting and retaining nucleus developers is far more important than having a large developer base per se.

As a result, we depart from Sun and Tse (2009), who emphasized the sheer size of the two sides of the market as the decisive factor in platform competition. In addition, our findings differ from the extant research (e.g., Yamakami, 2010, Holzer and Ondrus, 2011 and Schultz et al., 2011) that, grounded in network externalities (Katz and Shapiro, 1985), somewhat simplistically argues that a large base of developers leads to a large number of applications, which in turn leads to an increasing number of end-users, and vice versa. By differentiating between the overall supply of applications and the superstar applications, our study offers a more fine-grained view of the competition between mobile ecosystems.

5.3. Implications for practice

First, our observation that nucleus developers are active in multi-homing implies that the market might be able to sustain more than one ecosystem, particularly if the ecosystems are able to focus on specific customer segments and differentiate their offerings (Kouris and Kleer, 2012). This implies that several competing mobile application ecosystems can coexist in the future.

Second, our findings imply that application marketplaces are not used to differentiate an ecosystem from its competitors. The results of the content analysis show that the content of the most installed applications is similar across the three leading mobile application ecosystems. This supports the findings of Hyrynsalmi et al. (2013), who found no differentiation between either the consumers or the application offerings of the ecosystems. In addition, our analysis of the content of the most downloaded applications emphasizes the importance of games for attracting users to the marketplace.

As a result, the similar value propositions in all three major mobile application ecosystems support the existence of multiple platforms. The situation resembles, for example, the credit card market, where multiple competing companies with very similar value propositions co-exist. Hence, the mobile application marketplaces are not currently a source of differentiation for the different platforms. Future research could investigate whether the mobile application marketplace could become a source of differentiation and how this could be achieved.

Third, based on our empirical findings, we question the number-driven success metrics employed to evaluate mobile application ecosystems (e.g., Gupta, 2012 and Reuters, 2012). Furthermore, as pointed out by Hyrynsalmi et al. (2012), only a small share of all applications published in a marketplace are actually downloaded, and even fewer are actually used by customers. In other words, customers are either not interested in, or do not notice, most of the content available in the marketplace.

As a result, we advise practitioners and researchers to pay increasing attention to qualitative factors that enable the creation of successful application ecosystems (see e.g., Gonçalves and Ballon, 2011 and Eaton et al., 2015). In addition, we suggest ecosystem orchestrators and industry analysts move from counting the number of developers in the market toward evaluating the value of each developer.

Fourth and finally, our results further imply that nucleus developers’ bargaining power over ecosystem orchestrators is likely to increase in the future. This is due to the fact that a few nucleus developers create the applications that constitute the majority of installations in the application marketplace. Hence, attracting and sustaining nucleus developers is essential for maintaining an ecosystem’s competitiveness.

The ecosystem orchestrators can try to compensate for their lack of attractiveness among developers by developing popular applications in-house. For example, Facebook applications for Windows Phone and Blackberry have been developed by the ecosystem orchestrators instead of Facebook. Hence, the presence of the application appears to be even more important for the two ecosystem orchestrators than it is for Facebook.

Overall, since multi-homing is a common practice among nucleus developers, creating a clearly differentiated application offering is very difficult for ecosystem orchestrators. For application developers, our results imply that, despite the extra costs for porting the application to other ecosystems, multi-homing seems to be a viable distribution strategy.

5.4. Limitations and future research avenues

As with any study, this one is subject to a number of limitations. First, the data were collected over a short period of time. Second, the data gathering scripts were run from a server located in Finland; applications available only to customers in, for example, the US might therefore not have been visible from the location of the server we employed. Third, our study omits competition among multiple application stores serving the same platform. For example, there are several application stores for the Android platform. Because there are no porting costs, the competition dynamics between application stores within an ecosystem differ from cross-ecosystem competition and fall outside the scope of this study. Fourth, the matching strategies we utilized are only approximations of the actual situation.

Fifth, we employed the top application listings as a proxy for superstars. The top listings change over time and are based on download numbers that do not reveal an application’s actual level of use. For example, prior survey research indicates that mobile gaming is not of interest to customers (see e.g., Economides and Grousopoulou, 2009, Bouwman et al., 2010 and Suominen et al., 2014). However, our results show that games form the majority of the most installed applications. This might result from a pattern whereby a user downloads several games, tries them all once, and then removes the uninteresting ones from the device. Furthermore, it is possible that not all applications included in the listings are real superstars; for example, our data contained eight flashlight applications. Nevertheless, our approach of including all top listed applications can be justified by our aim of capturing all potential superstars. The use of top application lists also omits developers that have many successful applications but lack a single superstar. The number of installations for each application is available only on Google Play; we utilized this information to examine these kinds of developers and demonstrated that the seller-level multi-homing ratio is similar for developers identified through the listings of the most popular applications and through the figures offered by Google Play.

Drawing on the implications and limitations of the present study, we suggest three main avenues for future inquiry. First, we have advanced the model of Sun and Tse (2009) on the influence of multi-homing on platform competition in two-sided markets in order to capture the characteristics of mobile application ecosystems. To support the building of theories that illuminate the dynamics of business ecosystems, we encourage further research on different kinds of ecosystems. For example, the video game business shows similar tendencies of being a multilevel two-sided market, like the mobile application market (cf. Landsman and Stremersch, 2011).

With regard to the second area of future research, a business ecosystem should, among other success factors, support niche and opportunity creation (Iansiti and Levien 2004). However, we did not address different aspects of niche creation inside an ecosystem when discussing the numbers of developers and applications. As a result, we encourage further research to create measures for niche and opportunity creation, and to investigate whether the size of an ecosystem affects niche creation and the success of these niches.

Third, future research could investigate whether the large number of applications available in the marketplace adds value to the customer through, for example, increased opportunities to select new products and the pleasure obtained from browsing the selection, or whether a large offering has an adverse effect due to increased search costs.

6. Conclusion

This study assessed multi-homing in mobile application ecosystems using data on nearly 1.3 million applications from Apple’s App Store, Google Play, and Windows Phone Store. The results demonstrate that only a rather small subset of all applications and developers are multi-homing. Among the most popular applications and their developers, however, the multi-homing rates are roughly tenfold. Finally, we have shown that the value propositions of superstar applications are rather similar across the different ecosystems.

The study advances our theoretical understanding of the influence of multi-homing on competition between ecosystems by emphasizing the quality of the proposed content over the sheer size of an ecosystem. The results also indicate that several competing mobile application ecosystems can survive.

Acknowledgments

The authors wish to thank Ph.D. Ville Harkke, Ph.D. Kai Kimppa, University Teacher Antero Järvi, Associate Professor Marko Seppänen, Adjunct Professor Timo Knuutila, and D.Sc. Tuomas Mäkilä for discussions on the topic, as well as the anonymous reviewers for their insightful comments on different versions of the manuscript. The authors also wish to express their gratitude to B.Sc. Miika Oja-Nisula for his technical contributions to data gathering. Furthermore, Sami Hyrynsalmi is grateful to the Nokia Foundation for financially supporting his dissertation work on mobile ecosystems. This work was supported by the Academy of Finland through research grant 288609 for the project “Modeling Science and Technology Systems Through Massive Data Collections” (Suominen) and research grant 257412, “Digital Engagement: Uncovering the Customer Value of Social Media” (Mäntymäki).

Vitae

Sami Hyrynsalmi is a nerd who has always enjoyed working with programming and computers. After graduating as Master of Science in Technology in software engineering from University of Turku in 2009, he decided to focus on the real issues and started his doctoral dissertation work on mobile application ecosystems. After successfully defending his thesis in 2014, he has focused on various themes from software security to business ecosystems.

Arho Suominen is Senior Scientist at the Innovations, Economy, and Policy unit at VTT Technical Research Centre of Finland. His research focuses on qualitative and quantitative assessment of emerging technologies and innovation management. He has published work in several journals such as Scientometrics, Journal of Systems and Software, and Futures.

Matti Mäntymäki is an Academy of Finland post-doctoral researcher at Turku School of Economics, Finland. He holds a D.Sc. (Econ. & Bus. Adm) in information systems science. His research has been published in outlets such as International Journal of Information Management, Computers in Human Behavior and Behavior & Information Technology.

References

Url: http://www.sciencedirect.com/science/article/pii/S0164121215002010

[Ref] Peter Lawrey Describes Petabyte JVMs

by Charles Humble on Mar 09, 2015

It’s not unusual for financial service systems to have problems that require significant vertical, as opposed to horizontal, scaling. During his talk at QCon London, Peter Lawrey described the particular problems that occur when you scale a Java application beyond 32GB.

Starting from the observation that Java responds much faster if you can keep your data in memory rather than going to a database or some other external resource, Lawrey described the kinds of problems you hit when you go above the 32GB range in which Java is reasonably comfortable. As you’d expect, GC pause times become a major problem, but memory efficiency also drops significantly, and you have the problem of how to recover in the event of a failure.

Suppose your system dies and you want to pull in your data to rebuild it. If you are pulling data in at around 100 MB/s, which is not an unreasonable rate (it’s about the saturation point of a gigabit line, and if you have a faster network you may not want to max out those connections because you still need to handle user requests), then 10GB takes about two minutes to recover, but as your data sets get larger you are looking at hours or even days. A petabyte takes about four months, which is obviously unrealistic, particularly if your data is also changing.
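The timescales follow directly from the transfer rate; a quick back-of-the-envelope check (decimal units assumed):

```python
# Recovery time at an assumed ingest rate of 100 MB/s.
RATE = 100 * 10**6  # bytes per second

for label, size in [("10 GB", 10 * 10**9), ("1 TB", 10**12), ("1 PB", 10**15)]:
    seconds = size / RATE
    print(f"{label}: {seconds / 60:.1f} minutes, {seconds / 86400:.2f} days")
# 10 GB -> ~1.7 minutes; 1 TB -> ~2.8 hours; 1 PB -> ~116 days, i.e. roughly four months.
```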

Generally this problem is solved by replicating what is going on in a database. Lawrey mentioned Speedment SQL Reflector as one example of a product that can be used to do this.

The memory efficiency problems stem from the observation that, as a language, Java tends to produce a lot of references. Java 7 has compressed oops turned on by default if your heap is less than 32GB, which means it can use 32 bit references instead of 64 bit references. If your heap is below about 4GB the JVM can use direct addressing. Above this, it exploits the fact that every object in Java is aligned to 8 bytes, which means the bottom 3 bits of every address are always zero. Intel processors can intrinsically multiply a number by 8 before it is used as an address, so there is hardware support for this scheme, allowing the JVM to address 32GB instead of only 4GB with 32 bit references.

Compressed Oops with 8 byte alignment

Moving to Java 8 doubles the limit, since Java 8 adds support for an object alignment multiplier of 16, allowing it to address a 64GB heap using 32 bit references. However, if you need to go beyond 64GB in Java 8, then your only option is to use 64 bit references, which adds a small but significant overhead on main memory use. It also reduces the efficiency of CPU caches, as fewer objects fit in them.
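The 32GB and 64GB figures fall out of the arithmetic: 2^32 possible references multiplied by the object alignment.

```python
# Addressable heap with 32-bit compressed references and 8- or 16-byte object alignment.
GiB = 2**30
for alignment in (8, 16):   # Java 7 default alignment vs. the 16-byte option added in Java 8
    print(f"{alignment}-byte alignment: {(2**32 * alignment) / GiB:.0f} GiB")  # 32 GiB, then 64 GiB
```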

At this scale, GC pause times can become a significant issue. Lawrey noted that Azul Zing is a fully concurrent collector with worst-case pause times around 1-10 ms. Zing uses an extra level of indirection that allows it to move an object while it is being used, and it will scale to hundreds of gigabytes.

Another approach is to have a library that does the memory management for you in Java code: a product like Terracotta BigMemory or Hazelcast High-Density Memory Store can cache large amounts of data either within a single machine or across multiple machines. BigMemory uses off-heap memory to store the bulk of its data. The difference between these solutions and Zing is that you only get the benefit of the extra memory if you go through their library.

Another limit you hit is that many systems have NUMA regions limited to a terabyte. This isn’t set in stone, but Sandy Bridge and Ivy Bridge Xeon processors are limited to addressing 40 bits of memory; in Haswell this has been lifted to 46 bits. Each socket has “local” access to a bank of memory, but to access the other bank it needs to use a bus. This is much slower, and the GC in Java can perform very poorly if it doesn’t sit within one NUMA region, because the collector assumes it has uniform random access to the memory. If a GC runs across NUMA regions it can suddenly slow down dramatically and will also perform somewhat erratically.

NUMA Regions (~40 bits)

To get beyond 40 bits, many CPUs support a 48 bit address space; both the Intel and AMD 64 bit chips do this. They use a multi-tiered lookup to find any given page in memory. This means the page may be in a different place from where your application thinks it is, and in fact you can have more virtual memory in your application than you have physical memory. The virtual memory generally takes the form of memory-mapped files backed by disk. This introduces a 48 bit limit on the maximum size of an application: within CentOS that is 256TB, under Windows 192TB. The point is that memory mappings are not limited to main memory size.
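A memory mapping larger than physical RAM is easy to sketch; Python’s mmap module is used here purely for illustration (the JVM equivalent would be a MappedByteBuffer), and the file path and size are hypothetical, assuming a pre-allocated backing file.

```python
# Illustration: a memory-mapped region is bounded by the virtual address space,
# not by RAM; pages are faulted in from disk on access. Path and size are hypothetical.
import mmap

SIZE = 64 * 2**30                               # 64 GiB of virtual address space

with open("/data/cache.bin", "r+b") as f:       # hypothetical pre-allocated 64 GiB file
    with mmap.mmap(f.fileno(), SIZE) as region:
        region[0] = 1                           # touching a byte faults its page in
        print(len(region))                      # the mapping, not main memory, bounds the size
```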

Multiple JVMs within the same machine can share the same shared memory. Lawrey described a design he had put together for a potential client who needed a petabyte JVM (50 bits). You can’t actually map a petabyte in one go, so you have to cache the memory mappings to fit within the 256TB limit. In this case the prospect is looking to attach a petabyte of flash storage to a single machine (and to have more than one of these machines).

This is what it looks like:

Petabyte JVM

It’s running on a machine with 6TB of memory and 6 NUMA regions. Since, as previously noted, we want to restrict the heap, or at least the JVM, to a single NUMA region, you end up with 5 JVMs, each with a heap of up to 64GB, plus memory-mapped caches for both the indexes and the raw data, and a sixth NUMA region reserved for the operating system and monitoring tasks.

Replication is essential – restoring such a large machine would take a considerable length of time, and since the machine is so complex the chance of a failure is quite high.

In the Q&A that followed an attendee asked about the trade-offs of scaling this way versus horizontal scaling. Lawrey explained that in this use case they need something close to random access and for that they don’t want to be going across the network to get their data. “The interesting thing,” observed Lawrey, “is that you could even consider doing this in Java”.

Reference: http://www.infoq.com/news/2015/03/petabyte-jvms

[Ref] Microservices: Decomposing Applications for Deployability and Scalability

 Posted by Chris Richardson on May 25, 2014

This article describes the increasingly popular Microservice architecture pattern. The big idea behind microservices is to architect large, complex and long-lived applications as a set of cohesive services that evolve over time. The term microservices strongly suggests that the services should be small.

Some in the community even advocate building services of 10-100 lines of code. However, while it’s desirable to have small services, that should not be the main goal. Instead, you should aim to decompose your system into services that solve the kinds of development and deployment problems discussed below. Some services might indeed be tiny, whereas others might be quite large.

The essence of the microservice architecture is not new. The concept of a distributed system is very old. The microservice architecture also resembles SOA.

It has even been called lightweight or fine-grained SOA, and indeed one way to think about the microservice architecture is that it’s SOA without the commercialization and perceived baggage of WS-* and ESBs. Despite not being an entirely novel idea, the microservice architecture is still worthy of discussion, since it is different from traditional SOA and, more importantly, it solves many of the problems that many organizations currently suffer from.

In this article, you will learn about the motivations for using the microservice architecture and how it compares with the more traditional, monolithic architecture. We discuss the benefits and drawbacks of microservices, and you will learn how to solve some of the key technical challenges of using the microservice architecture, including inter-service communication and distributed data management.

The (sometimes evil) monolith

Since the earliest days of developing applications for the web, the most widely used enterprise application architecture has been one that packages all the application’s server-side components into a single unit. Many enterprise Java applications consist of a single WAR or EAR file. The same is true of other applications written in other languages such as Ruby and even C++.

Let’s imagine, for example, that you are building an online store that takes orders from customers, verifies inventory and available credit, and ships them. It’s quite likely that you would build an application like the one shown in figure 1.

Figure 1 – the monolithic architecture

The application consists of several components including the StoreFront UI, which implements the user interface, along with services for managing the product catalog, processing orders and managing the customer’s account. These services share a domain model consisting of entities such as Product, Order, and Customer.

Despite having a logically modular design, the application is deployed as a monolith. For example, if you were using Java, the application would consist of a single WAR file running on a web container such as Tomcat. The Rails version of the application would consist of a single directory hierarchy deployed using, for example, either Phusion Passenger on Apache/Nginx or JRuby on Tomcat.

This so-called monolithic architecture has a number of benefits. Monolithic applications are simple to develop since IDEs and other development tools are oriented around developing a single application. They are easy to test since you just need to launch the one application. Monolithic applications are also simple to deploy since you just have to copy the deployment unit – a file or directory – to a machine running the appropriate kind of server.

This approach works well for relatively small applications. However, the monolithic architecture becomes unwieldy for complex applications. A large monolithic application can be difficult for developers to understand and maintain. It is also an obstacle to frequent deployments: to deploy changes to one application component you have to build and deploy the entire monolith, which can be complex, risky, and time consuming, can require the coordination of many developers, and can result in long test cycles.

A monolithic architecture also makes it difficult to trial and adopt new technologies. It’s difficult, for example, to try out a new infrastructure framework without rewriting the entire application, which is risky and impractical. Consequently, you are often stuck with the technology choices that you made at the start of the project. In other words, the monolithic architecture doesn’t scale to support large, long-lived applications.

Decomposing applications into services

Fortunately, there are other architectural styles that do scale. The book The Art of Scalability describes a really useful, three-dimensional scalability model: the scale cube, which is shown in Figure 2.

Figure 2 – the scale cube

In this model, the commonly used approach of scaling an application by running multiple identical copies of the application behind a load balancer is known as X-axis scaling. That’s a great way of improving the capacity and the availability of an application.

When using Z-axis scaling, each server runs an identical copy of the code. In this respect, it’s similar to X-axis scaling. The big difference is that each server is responsible for only a subset of the data. Some component of the system is responsible for routing each request to the appropriate server. One commonly used routing criterion is an attribute of the request, such as the primary key of the entity being accessed, i.e., sharding. Another common routing criterion is the customer type. For example, an application might provide paying customers with a higher SLA than free customers by routing their requests to a different set of servers with more capacity.
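A minimal sketch of such a router is shown below; the shard hosts, the premium pool, and the hash-based assignment are illustrative choices, not a prescribed design.

```python
# Sketch of Z-axis routing: each request goes to the server that owns its shard,
# with a separate pool for paying customers. Host names are hypothetical.
import hashlib

SHARDS = ["orders-0.internal", "orders-1.internal", "orders-2.internal"]
PREMIUM_POOL = ["premium-0.internal", "premium-1.internal"]   # higher-SLA servers

def route(entity_key: str, customer_type: str = "free") -> str:
    pool = PREMIUM_POOL if customer_type == "paying" else SHARDS
    digest = hashlib.sha1(entity_key.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]                  # primary-key based sharding

print(route("order-42"))             # one of the three shard servers
print(route("order-42", "paying"))   # routed to the premium pool instead
```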

Z-axis scaling, like X-axis scaling, improves the application’s capacity and availability. However, neither approach solves the problems of increasing development and application complexity. To solve those problems we need to apply Y-axis scaling.

The third dimension of scaling is Y-axis scaling, or functional decomposition. Whereas Z-axis scaling splits things that are similar, Y-axis scaling splits things that are different. At the application tier, Y-axis scaling splits a monolithic application into a set of services. Each service implements a set of related functionality such as order management, customer management, etc.

Deciding how to partition a system into a set of services is very much an art, but there are a number of strategies that can help. One approach is to partition services by verb or use case. For example, later on you will see that the partitioned online store has a Checkout UI service, which implements the UI for the checkout use case.

Another partitioning approach is to partition the system by nouns or resources. This kind of service is responsible for all operations that operate on entities/resources of a given type. For example, later on you will see how it makes sense for the online store to have a Catalog service, which manages the catalog of products.

Ideally, each service should have only a small set of responsibilities. (Uncle) Bob Martin talks [PDF] about designing classes using the Single Responsibility Principle (SRP). The SRP defines a responsibility of a class as a reason to change, and states that a class should only have one reason to change. It makes sense to apply the SRP to service design as well.

Another analogy that helps with service design is the design of Unix utilities. Unix provides a large number of utilities such as grep, cat and find. Each utility does exactly one thing, often exceptionally well, and can be combined with other utilities using a shell script to perform complex tasks. It makes sense to model services on Unix utilities and create single function services.

It’s important to note that the goal of decomposition is not to have tiny (e.g. 10-100 LOC as some argue) services simply for the sake of it. Instead, the goal is to address the problems and limitations of the monolithic architecture described above. Some services could very well be tiny but others will be substantially larger.

If we apply Y-axis decomposition to the example application we get the architecture shown in figure 3.

Figure 3 – the microservice architecture

The decomposed application consists of various frontend services that implement different parts of the user interface and multiple backend services. The frontend services include the Catalog UI, which implements product search and browsing, and the Checkout UI, which implements the shopping cart and the checkout process. The backend services include the same logical services that were described at the start of this article. We have turned each of the application’s main logical components into a standalone service. Let’s look at the consequences of doing that.

Benefits and drawbacks of a microservice architecture

This architecture has a number of benefits. First, each microservice is relatively small. The code is easier for a developer to understand, and the small code base doesn’t slow down the IDE, making developers more productive. Also, each service typically starts a lot faster than a large monolith, which again makes developers more productive and speeds up deployments.

Second, each service can be deployed independently of other services. If the developers responsible for a service need to deploy a change that’s local to that service they do not need to coordinate with other developers. They can simply deploy their changes. A microservice architecture makes continuous deployment feasible.

Third, each service can be scaled independently of other services using X-axis cloning and Z-axis partitioning. Moreover, each service can be deployed on hardware that is best suited to its resource requirements. This is quite different than when using a monolithic architecture where components with wildly different resource requirements – e.g. CPU intensive vs. memory intensive – must be deployed together.

The microservice architecture makes it easier to scale development. You can organize the development effort around multiple, small (e.g. two pizza) teams. Each team is solely responsible for the development and deployment of a single service or a collection of related services. Each team can develop, deploy and scale their service independently of all of the other teams.

The microservice architecture also improves fault isolation. For example, a memory leak in one service only affects that service. Other services will continue to handle requests normally. In comparison, one misbehaving component of a monolithic architecture will bring down the entire system.

Last but not least, the microservice architecture eliminates any long-term commitment to a technology stack. In principle, when developing a new service the developers are free to pick whatever language and frameworks are best suited for that service. Of course, in many organizations it makes sense to restrict the choices but the key point is that you aren’t constrained by past decisions.

Moreover, because the services are small, it becomes practical to rewrite them using better languages and technologies. It also means that if the trial of a new technology fails you can throw away that work without risking the entire project. This is quite different than when using a monolithic architecture, where your initial technology choices severely constrain your ability to use different languages and frameworks in the future.

Drawbacks

Of course, no technology is a silver bullet, and the microservice architecture has a number of significant drawbacks and issues. First, developers must deal with the additional complexity of creating a distributed system. Developers must implement an inter-process communication mechanism. Implementing use cases that span multiple services without using distributed transactions is difficult. IDEs and other development tools are focused on building monolithic applications and don’t provide explicit support for developing distributed applications. Writing automated tests that involve multiple services is challenging. These are all issues that you don’t have to deal with in a monolithic architecture.

The microservice architecture also introduces significant operational complexity. There are many more moving parts, multiple instances of different types of service, that must be managed in production. To do this successfully, you need a high level of automation, whether home-grown code, a PaaS-like technology such as Netflix Asgard and related components, or an off-the-shelf PaaS such as Pivotal Cloud Foundry.

Also, deploying features that span multiple services requires careful coordination between the various development teams. You have to create a rollout plan that orders service deployments based on the dependencies between services. That’s quite different than when using a monolithic architecture where you can easily deploy updates to multiple components atomically.

Another challenge with using the microservice architecture is deciding at what point during the lifecycle of the application you should use this architecture. When developing the first version of an application, you often do not have the problems that this architecture solves. Moreover, using an elaborate, distributed architecture will slow down development.

This can be a major dilemma for startups whose biggest challenge is often how to rapidly evolve the business model and accompanying application. Using Y-axis splits might make it much more difficult to iterate rapidly. Later on, however, when the challenge is how to scale and you need to use functional decomposition, then tangled dependencies might make it difficult to decompose your monolithic application into a set of services.

Because of these issues, adopting a microservice architecture should not be undertaken lightly. However, for applications that need to scale, such as a consumer-facing web application or SaaS application, it is usually the right choice. Well-known sites such as eBay [PDF], Amazon.com, Groupon, and Gilt have all evolved from a monolithic architecture to a microservice architecture.

Now that we have looked at the benefits and drawbacks let’s look at a couple of key design issues within a microservice architecture, beginning with communication mechanisms within the application and between the application and its clients.

Communication mechanisms in a microservice architecture

In a microservice architecture, the patterns of communication between clients and the application, as well as between application components, are different than in a monolithic application. Let’s first look at the issue of how the application’s clients interact with the microservices. After that we will look at communication mechanisms within the application.

API gateway pattern

In a monolithic architecture, clients of the application, such as web browsers and native applications, make HTTP requests via a load balancer to one of N identical instances of the application. But in a microservice architecture, the monolith has been replaced by a collection of services. Consequently, a key question we need to answer is what do the clients interact with?

An application client, such as a native mobile application, could make RESTful HTTP requests to the individual services as shown in figure 4.

Figure 4 – calling services directly

On the surface this might seem attractive. However, there is likely to be a significant mismatch in granularity between the APIs of the individual services and the data required by the clients. For example, displaying one web page could potentially require calls to a large number of services. Amazon.com, for example, describes how some pages require calls to 100+ services. Making that many requests, even over a high-speed internet connection, let alone a lower-bandwidth, higher-latency mobile network, would be very inefficient and result in a poor user experience.

A much better approach is for clients to make a small number of requests per-page, perhaps as few as one, over the Internet to a front-end server known as an API gateway, which is shown in Figure 5.

Figure 5 – API gateway

The API gateway sits between the application’s clients and the microservices. It provides APIs that are tailored to the client: a coarse-grained API for mobile clients and a finer-grained API for desktop clients that use a high-performance network. In this example, a desktop client makes multiple requests to retrieve information about a product, whereas a mobile client makes a single request.

The API gateway handles incoming requests by making requests to some number of microservices over the high-performance LAN. Netflix, for example, describes how each request fans out to an average of six backend services. In this example, fine-grained requests from a desktop client are simply proxied to the corresponding service, whereas each coarse-grained request from a mobile client is handled by aggregating the results of calls to multiple services.
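A minimal sketch of that aggregation step is shown below; the backend URLs and the use of a thread pool with the requests library are illustrative assumptions, not a specific product’s API.

```python
# Sketch of an API gateway fanning out to several backend services in parallel
# and merging the results into one coarse-grained response. URLs are hypothetical.
from concurrent.futures import ThreadPoolExecutor

import requests

BACKENDS = {
    "product":  "http://catalog.internal/products/{id}",
    "reviews":  "http://reviews.internal/products/{id}/reviews",
    "shipping": "http://shipping.internal/products/{id}/options",
}

def product_page(product_id: str) -> dict:
    """Build the mobile client's single-response product page."""
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = {
            name: pool.submit(lambda u=url: requests.get(u.format(id=product_id), timeout=2).json())
            for name, url in BACKENDS.items()
        }
        return {name: future.result() for name, future in futures.items()}
```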

Not only does the API gateway optimize communication between clients and the application, but it also encapsulates the details of the microservices. This enables the microservices to evolve without impacting the clients. For example, two microservices might be merged, or another microservice might be partitioned into two or more services. Only the API gateway needs to be updated to reflect these changes; the clients are unaffected.

Now that we have looked at how the API gateway mediates between the application and its clients, let’s look at how to implement communication between microservices.

Inter-service communication mechanisms

Another major difference in the microservice architecture is how the different components of the application interact. In a monolithic application, components call one another via regular method calls. But in a microservice architecture, different services run in different processes. Consequently, services must use an inter-process communication (IPC) mechanism to communicate.

Synchronous HTTP

There are two main approaches to inter-process communication in a microservice architecture. One option is to use a synchronous HTTP-based mechanism such as REST or SOAP. This is a simple and familiar IPC mechanism. It’s firewall-friendly, so it works across the Internet, and implementing the request/reply style of communication is easy. The downside of HTTP is that it doesn’t support other patterns of communication such as publish-subscribe.

Another limitation is that both the client and the server must be simultaneously available, which is not always the case since distributed systems are prone to partial failures. Also, an HTTP client needs to know the host and the port of the server. While this sounds simple, it’s not entirely straightforward, especially in a cloud deployment that uses auto-scaling where service instances are ephemeral. Applications need to use a service discovery mechanism. Some applications use a service registry such as Apache ZooKeeper or Netflix Eureka. In other applications, services must register with a load balancer, such as an internal ELB in an Amazon VPC.
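
As a rough illustration of what a synchronous call between services involves, the sketch below resolves the target service’s location through a made-up ServiceRegistry interface and then issues a blocking HTTP request. The interface is only a stand-in for a real discovery mechanism such as ZooKeeper or Eureka, whose actual APIs differ.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Illustrative stand-in for a service discovery mechanism: given a logical
    // service name, return the current host and port of a live instance.
    interface ServiceRegistry {
        URI lookup(String serviceName);   // e.g. http://10.0.1.17:8080
    }

    public class CustomerClient {

        private final ServiceRegistry registry;
        private final HttpClient http = HttpClient.newHttpClient();

        public CustomerClient(ServiceRegistry registry) {
            this.registry = registry;
        }

        // Synchronous request/reply: the caller blocks until the remote service
        // responds, so that service must be available at the time of the call.
        public String getCreditLimit(String customerId) throws Exception {
            URI base = registry.lookup("customer-service");
            HttpRequest request = HttpRequest.newBuilder(
                    base.resolve("/customers/" + customerId + "/creditLimit")).GET().build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }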

Asynchronous messaging

An alternative to synchronous HTTP is an asynchronous message-based mechanism such as an AMQP-based message broker. This approach has a number of benefits. It decouples message producers from message consumers. The message broker will buffer messages until the consumer is able to process them. Producers are completely unaware of the consumers. The producer simply talks to the message broker and does not need to use a service discovery mechanism. Message-based communication also supports a variety of communication patterns including one-way requests and publish-subscribe. One downside of using messaging is needing a message broker, which is yet another moving part that adds to the complexity of the system. Another downside is that request/reply-style communication is not a natural fit.
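
The following sketch, which is not part of the original article, shows a producer and a consumer sharing a queue via the RabbitMQ Java client (one AMQP-based broker). The queue name and JSON payload are invented, and in a real system the producer and consumer would live in separate services rather than one main method.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    // Illustrative producer and consumer, assuming a RabbitMQ broker on localhost.
    public class MessagingSketch {

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                channel.queueDeclare("customer-events", true, false, false, null);

                // Producer: fire-and-forget publish. The broker buffers the message
                // until a consumer is ready, so producer and consumer are decoupled.
                String event = "{\"type\":\"CustomerCreditLimitUpdated\",\"customerId\":\"42\",\"creditLimit\":1000}";
                channel.basicPublish("", "customer-events", null, event.getBytes(StandardCharsets.UTF_8));

                // Consumer: the callback runs whenever a message arrives on the queue.
                channel.basicConsume("customer-events", true,
                        (consumerTag, delivery) ->
                                System.out.println("received: " + new String(delivery.getBody(), StandardCharsets.UTF_8)),
                        consumerTag -> { });
            }
        }
    }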

There are pros and cons of both approaches. Applications are likely to use a mixture of the two. For example, in the next section, which discusses how to solve data management problems that arise in a partitioned architecture, you will see how both HTTP and messaging are used.

Decentralized data management

A consequence of decomposing the application into services is that the database is also partitioned. To ensure loose coupling, each service has its own database (schema). Moreover, different services might use different types of database – a so-called polyglot persistence architecture. For example, a service that needs ACID transactions might use a relational database, whereas a service that is manipulating a social network might use a graph database. Partitioning the database is essential, but we now have a new problem to solve: how to handle those requests that access data owned by multiple services. Let’s first look at how to handle read requests and then look at update requests.

Handling reads

For example, consider an online store where each customer has a credit limit. When a customer attempts to place an order the system must verify that the sum of all open orders would not exceed their credit limit. It would be trivial to implement this business rule in a monolithic application. But it’s much more difficult to implement this check in a system where customers are managed by the CustomerService and orders are managed by the OrderService. Somehow the OrderService must access the credit limit maintained by the CustomerService.

One solution is for the OrderService to retrieve the credit limit by making an RPC call to the CustomerService. This approach is simple to implement and ensures that the OrderService always has the most current credit limit. The downside is that it reduces availability because the CustomerService must be running in order to place an order. It also increases response time because of the extra RPC call.

Another approach is for the OrderService to store a copy of the credit limit. This eliminates the need to make a request to the CustomerService and so improves availability and reduces response time. It does mean, however, that we must implement a mechanism to update the OrderService’s copy of the credit limit whenever it changes in the CustomerService.
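
For illustration only, the two read strategies might look like this inside the OrderService; the class, interface, and method names are hypothetical, and persistence is reduced to an in-memory map.

    import java.math.BigDecimal;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the two read strategies inside the OrderService.
    public class CreditChecker {

        // Option 1: ask the CustomerService on every order (always current,
        // but couples order placement to CustomerService availability).
        interface CustomerServiceClient {
            BigDecimal getCreditLimit(String customerId);   // remote call, e.g. REST
        }

        // Option 2: keep a local copy, refreshed whenever the limit changes.
        private final Map<String, BigDecimal> creditLimitReplica = new ConcurrentHashMap<>();

        private final CustomerServiceClient customerService;

        public CreditChecker(CustomerServiceClient customerService) {
            this.customerService = customerService;
        }

        public boolean canPlaceOrderUsingRpc(String customerId, BigDecimal openOrdersTotal, BigDecimal newOrderTotal) {
            BigDecimal limit = customerService.getCreditLimit(customerId);   // extra network hop
            return openOrdersTotal.add(newOrderTotal).compareTo(limit) <= 0;
        }

        public boolean canPlaceOrderUsingReplica(String customerId, BigDecimal openOrdersTotal, BigDecimal newOrderTotal) {
            BigDecimal limit = creditLimitReplica.getOrDefault(customerId, BigDecimal.ZERO);   // local read
            return openOrdersTotal.add(newOrderTotal).compareTo(limit) <= 0;
        }

        // Called when the OrderService learns of a new credit limit (see the next section).
        public void onCreditLimitUpdated(String customerId, BigDecimal newLimit) {
            creditLimitReplica.put(customerId, newLimit);
        }
    }

The trade-off is visible in the two check methods: the RPC version is always up to date but adds latency and an availability dependency, while the replica version is fast and local but only as fresh as the last update it received.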

Handling update requests

The problem of keeping the credit limit up to date in OrderService is an example of the more general problem of handling requests that update data owned by multiple services.

Distributed transactions

One solution, of course, is to use distributed transactions. For example, when updating a customer’s credit limit, the CustomerService could use a distributed transaction to update both its own credit limit and the copy maintained by the OrderService. Using distributed transactions would ensure that the data is always consistent. The downside is that they reduce system availability, since all participants must be available in order for the transaction to commit. Moreover, distributed transactions have largely fallen out of favor and are generally not supported by modern software stacks such as REST and NoSQL databases.

Event-driven asynchronous updates

The other approach is to use event-driven asynchronous replication. Services publish events announcing that some data has changed. Other services subscribe to those events and update their data. For example, when the CustomerService updates a customer’s credit limit it publishes a CustomerCreditLimitUpdatedEvent, which contains the customer id and the new credit limit. The OrderService subscribes to these events and updates its copy of the credit limit. The flow of events is shown in Figure 6.

Figure 6 – replicating the credit limit using events

A major benefit of this approach is that producers and consumers of the events are decoupled. Not only does this simplify development but compared to distributed transactions it improves availability. If a consumer isn’t available to process an event then the message broker will queue the event until it can. A major drawback of this approach is that it trades consistency for availability. The application has to be written in a way that can tolerate eventually consistent data. Developers might also need to implement compensating transactions to perform logical rollbacks. Despite these drawbacks, however, this is the preferred approach for many applications.
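
A sketch of the publishing side might look like the following; the event fields match the example above, while the EventPublisher interface is an invented stand-in for whatever message broker the application uses.

    import java.math.BigDecimal;

    // Sketch of the event published by the CustomerService (field names assumed).
    class CustomerCreditLimitUpdatedEvent {
        final String customerId;
        final BigDecimal newCreditLimit;

        CustomerCreditLimitUpdatedEvent(String customerId, BigDecimal newCreditLimit) {
            this.customerId = customerId;
            this.newCreditLimit = newCreditLimit;
        }
    }

    // Stand-in for the messaging layer (e.g. an AMQP broker).
    interface EventPublisher {
        void publish(Object event);
    }

    class CustomerService {

        private final EventPublisher publisher;

        CustomerService(EventPublisher publisher) {
            this.publisher = publisher;
        }

        void updateCreditLimit(String customerId, BigDecimal newLimit) {
            // 1. Update the CustomerService's own database (omitted in this sketch).
            // 2. Announce the change. The OrderService's subscriber will eventually
            //    update its copy of the limit, so the system is eventually consistent.
            publisher.publish(new CustomerCreditLimitUpdatedEvent(customerId, newLimit));
        }
    }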

Refactoring a monolith

Unfortunately, we don’t always have the luxury of working on a brand new, greenfield project. There is a pretty good chance that you are on the team that’s responsible for a huge, scary monolithic application. And, every day you are dealing with the problems described at the start of this article. The good news is that there are techniques that you can use to decompose your monolithic application into a set of services.

First, stop making the problem worse. Don’t continue to implement significant new functionality by adding code to the monolith. Instead, you should find a way to implement new functionality as a standalone service as shown in Figure 7. This probably won’t be easy. You will have to write messy, complex glue code to integrate the service with the monolith. But it’s a good first step in breaking apart the monolith.

Figure 7 – extracting a service

Second, identify a component of the monolith to turn into a cohesive, standalone service. Good candidates for extraction include components that are constantly changing, or components that have conflicting resource requirements, such as large in-memory caches or CPU-intensive operations. The presentation tier is another good candidate. You then turn the component into a service and write glue code to integrate it with the rest of the application. Once again, this will probably be painful, but it enables you to incrementally migrate to a microservice architecture.
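
As a hypothetical example of such glue code, the monolith can keep calling an existing interface while a new implementation delegates over HTTP to the extracted service; the RecommendationService name and endpoint below are invented for illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // The interface the rest of the monolith already depends on.
    interface RecommendationService {
        String recommendationsFor(String customerId);
    }

    // Glue code: same interface, but the work now happens in the extracted
    // recommendation service, reached over HTTP (e.g. via a load balancer).
    class RemoteRecommendationService implements RecommendationService {

        private final HttpClient http = HttpClient.newHttpClient();
        private final URI baseUrl;

        RemoteRecommendationService(URI baseUrl) {
            this.baseUrl = baseUrl;
        }

        @Override
        public String recommendationsFor(String customerId) {
            try {
                HttpRequest request = HttpRequest.newBuilder(
                        baseUrl.resolve("/recommendations?customerId=" + customerId)).GET().build();
                return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                throw new RuntimeException("recommendation service unavailable", e);
            }
        }
    }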

Summary

The monolithic architecture pattern is a commonly used pattern for building enterprise applications. It works reasonably well for small applications: developing, testing and deploying small monolithic applications is relatively simple. However, for large, complex applications, the monolithic architecture becomes an obstacle to development and deployment. Continuous delivery is difficult, and you are often permanently locked into your initial technology choices. For large applications, it makes more sense to use a microservice architecture that decomposes the application into a set of services.

The microservice architecture has a number of advantages. For example, individual services are easier to understand and can be developed and deployed independently of other services. It is also a lot easier to adopt new languages and frameworks because you can try out new technologies one service at a time. A microservice architecture also has some significant drawbacks. In particular, applications are much more complex and have many more moving parts. You need a high level of automation, such as a PaaS, to use microservices effectively. You also need to deal with some complex distributed data management issues when developing microservices. Despite the drawbacks, a microservice architecture makes sense for large, complex applications that are evolving rapidly, especially for SaaS-style applications.

There are various strategies for incrementally evolving an existing monolithic application to a microservice architecture. Developers should implement new functionality as a standalone service and write glue code to integrate the service with the monolith. It also makes sense to iteratively identify components to extract from the monolith and turn into services. While the evolution is not easy, it’s better than trying to develop and maintain an unwieldy monolithic application.

About the Author

Chris Richardson is a developer and architect. He is a Java Champion, a JavaOne rock star and the author of POJOs in Action, which describes how to build enterprise Java applications with POJOs and frameworks such as Spring and Hibernate. Chris is also the founder of the original Cloud Foundry, an early Java PaaS for Amazon EC2. He consults with organizations to improve how they develop and deploy applications using technologies such as cloud computing, microservices, and NoSQL. Twitter @crichardson.

Reference: http://www.infoq.com/articles/microservices-intro

[Ref] High Quality User Stories – INVEST

INVEST – Definition

The acronym INVEST helps to remember a widely accepted set of criteria, or checklist, to assess the quality of a user story. If the story fails to meet one of these criteria, the team may want to reword it, or even consider a rewrite (which often translates into physically tearing up the old story card and writing a new one).

A good user story should be:

  • Independent (of all others)
  • Negotiable (not a specific contract for features)
  • Valuable (or vertical)
  • Estimable (to a good approximation)
  • Small (so as to fit within an iteration)
  • Testable (in principle, even if there isn’t a test for it yet)

Origins

  • 2003: the INVEST checklist for quickly evaluating user stories originates in an article by Bill Wake, which also repurposed the acronym SMART (Specific, Measurable, Achievable, Relevant, Time-boxed) for tasks resulting from the technical decomposition of user stories.
  • 2004: the INVEST acronym is among the techniques recommended in Mike Cohn’s “User Stories Applied”, which discusses it at length in Chapter 2.

Reference: http://guide.agilealliance.org/guide/invest.html#sthash.pae6mRLS.dpuf

[Ref] Developing Adaptive Leaders for Turbulent Times: The Michigan Model of Leadership


In complex and dynamic times, the Michigan Model of Leadership enables leaders to recognise and effectively manage competing tensions in organisational life. Leaders who utilise the process of Mindful Engagement learn to balance these tensions and make an impact in a world where there are no easy answers. We need leaders with empathy, drive, integrity, and courage – across society and throughout organisational hierarchies – whose core purpose is to make a positive difference in the lives of others.

Our generation has been witness to revolutionary advancements in industrial and information technology. Yet, modern organisations face challenges that are unprecedented in complexity and scale. The globalisation of international trade is creating more complex flows of people, goods, funds, and technology across national and political boundaries. Economic institutions that were historically independent are now part of a global ecosystem that, upon its collapse in 2008-2009, erased $14.5 trillion, or 33 per cent, of the value of the world’s companies in only 6 months. Furthermore, the addition of 80 million people each year to an already overcrowded planet is exacerbating the problems of pollution, desertification, underemployment, epidemics, and famine. Two billion people lack access to clean water, 80% of people live on $10 or less per day, only 53% of students in U.S. cities graduate high school, and climate change threatens to alter our way of life. These challenges will define the future of business and society, but how business and society respond to these challenges will define our generation’s legacy. Leadership has always been important, but the need for leaders who embrace this responsibility and can mobilise collective action in service of bringing about positive change has never been greater.

Historically, societies have looked to leaders as heroic figures with the charisma to charm the hearts of people and show them the way forward. Think about Martin Luther King Jr. during the civil rights movement of the 1960s in the United States, or Winston Churchill leading the United Kingdom during the Second World War. Unfortunately, Adolf Hitler had similar charismatic qualities that allowed him to capture the hearts of the Nazi party, leading to the death of millions. To address the political, economic and social challenges of our generation, we need more than charismatic figures. We need leaders whose core purpose in life is to make a positive difference in the lives of others, and who embody the courage, empathy, integrity and drive that is necessary to tackle tough challenges. Moreover, people routinely confuse leadership with formal or hierarchical power, expecting leadership only of those who hold lofty titles or positions of authority. Instead, we need leadership from all corners of society and at all levels of organisations. Today’s challenges are simply too complex and the need too immediate for people to be waiting for direction from a single leader. Leadership is not a right that is afforded to some but not others. Neither is leadership merely a position. Rather, leadership is a set of actions that anyone can engage in, and we need each person to have a bias towards action with a commitment to the collective good. Finally, most people look to leaders for answers, but given the challenges we face, leaders must understand that there is rarely a single answer. Rather, there is a competing set of tensions and trade-offs that must be considered, and leadership is about making tough choices and balancing those competing tensions.

We need leaders whose core purpose in life is to make a positive difference in the lives of others, and who embody the courage, empathy, integrity and drive that is necessary to tackle tough challenges.

Our purpose in this article is to introduce a model of leadership that illustrates the core purpose, values and actions that are necessary for leading in today’s complex and dynamic world. In the 1950s, scholars from the University of Michigan — Daniel Katz, Robert Kahn, and Rensis Likert — conducted ground-breaking leadership research that spawned the Human Relations movement. Based on their research, managers were encouraged to adopt leadership styles that were less job-oriented and more employee-oriented by showing consideration for the needs of employees and enabling their participation in organisational decisions. What may sound obvious today was revolutionary in the 1950s, at which point leadership was mostly about providing structure and ensuring jobs were completed within specification. In this article, we hope to stand on the shoulders of Katz, Kahn and Likert (and others) to introduce a new way of thinking about leadership as a means to positive change in business and society. This new model — called the Michigan Model of Leadership — brings to the foreground the core purpose of making a positive impact on business and society, and articulates the values and actions that are needed to balance tensions between stability and change, and internal versus external stakeholders. After introducing the model, we identify strategies and practices for developing responsible, purpose-driven leaders in your organisation.

 

The Michigan Model of Leadership

The Michigan Model of Leadership (MMoL) explains how people can lead positive change in their lives, teams, organisations, and society. The MMoL is deeply embedded in the leadership research conducted by many prominent scholars across an array of organisations, market sectors and national boundaries.

To be clear, we make several assumptions about leadership in the 21st century. First, leadership is not defined as a position or title. Instead, it is a set of actions that anyone can engage in regardless of where they sit in an organisational hierarchy. As Robert Quinn (University of Michigan) describes in his research on the fundamental state of leadership, at any time, each of us can choose to be and act as a leader. Second, effective leaders do not lead by commanding compliance of others. Instead, effective leaders empower, challenge, and support others to accomplish shared goals. In this sense, leadership is not something you do to people, but rather is about how you work through other people to enable excellence. Third, effective leaders are acutely aware of their personal strengths and how to leverage those strengths to bring out the best in themselves and others. No leader is perfect. All leaders have weaknesses, but the effective ones understand how to complement their weaknesses and leverage their strengths to enable their own and others’ best selves. These assumptions are important because they make leadership accessible to people young and old, with power and without it. Leadership is a choice, and all of us can choose to lead.

At the centre of the MMoL is a core purpose: to make a positive difference in the world. What do we mean by positive difference? It is about impact and legacy — leaving your team, organisation, or even the world a better place than you found it. Researchers such as Adam Grant (University of Pennsylvania) have shown that focusing people on the impact of their work — for example, the positive impact on customers — is not only motivating and inspiring, but it also results in sustainable performance improvement. We are teaching leaders to visualize the impact of their work, use that positive impact as a calling to mobilise their teams, and ultimately achieve greater performance by embracing the purpose of making a positive difference in the world as their own.

What do we mean by positive difference? It is about impact and legacy — leaving your team, organisation, or even the world a better place than you found it.

Surrounding this core purpose — what we refer to as the positive core — is a set of values describing how the mission is achieved. Our research shows that the most effective leaders (1) are empathetic and committed to seeing the world through others’ eyes; (2) are driven and routinely stretch to achieve challenging goals; (3) have integrity and are committed to doing the right thing even if it is not the popular thing; and finally (4) are courageous and consider risk and failure to be necessary ingredients for innovation. These values form a strong foundation for action and serve as guideposts for leaders as they work to make a positive difference in the world.

With the core purpose and values as its foundation, the MMoL then describes the leadership actions that are necessary for thriving in today’s global, dynamic and complex environments. Leadership is not only about painting inspirational visions, or structuring organisational processes for execution, or fostering collaboration and innovation. All of these actions are important, but to be effective, leaders must balance a set of competing forces. Leaders must simultaneously balance the stability required for execution with the change required for innovation. Leaders must balance the need for internal collaboration and community with external performance pressures from outside the team. Building on research by Robert Quinn and Kim Cameron (University of Michigan), we have identified four leadership archetypes that embody these competing tensions. Each archetype has inherent strengths and weaknesses. Only by juxtaposing and managing the competing tensions can leaders create sustained effectiveness over time.

Too much emphasis on innovation and change can produce inefficiencies or even organisational chaos that keeps the organisation from implementing new ideas.

Robust Results (blue) represents the actions that leaders engage in to foster competition, perform under pressure, and deliver short-term results. This archetype is often in direct tension with Collaborative Communities (yellow), which represents the actions involved in building high-quality relationships, empowering people, and cultivating trust and cohesion within teams. In many organisations, competition and an emphasis on short-term performance undermine collaboration and the importance of community. Yet, in other organisations, too much of an emphasis on harmony within the community produces a happy yet under-performing culture where people are unwilling to challenge each other in service of achieving higher performance.

Strategic Structures (red) represents the actions that leaders engage in to establish accountability, ensure reliable processes, and optimize efficiency. This archetype is often in direct contrast with Creative Change (green), which represents the actions required to enable change, inspire innovation and co-create new opportunities. In many organisations, an over-emphasis on structure and process can root out innovation, but at the same time, too much emphasis on innovation and change can produce inefficiencies or even organisational chaos that keeps the organisation from implementing new ideas.

Unlike traditional models of leadership that prescribe a menu of leadership behaviours, the MMoL illustrates how well-intended leadership behaviours can solve one problem while introducing a new problem. Consider the contrast between Steve Jobs, the legendary founder of Apple, and current Apple CEO Tim Cook. Jobs, strong in the green Creative Change quadrant, was a prolific visionary with numerous path-breaking products to his name. But he neglected key issues regarding Apple’s supply chain (witness the repeated problems with Apple’s Chinese suppliers). Cook, in contrast, lacks the brilliant mind of a designer, but he brings important strengths in the red Strategic Structures quadrant. He streamlined Apple’s supply chain, reduced inventory levels and increased margins while building confidence in the integrity of suppliers. The implication for leadership development is profound. Every person has a unique set of strengths, but in line with these competing tensions, those strengths will inevitably introduce a unique set of weaknesses that can undermine sustainable performance. It is a rare person who can perform all of these leadership functions well. What we need are leaders who not only recognize the competing tensions but also understand that their role as a leader is not to resolve the tension. Rather, leadership is about helping the organisation dynamically manage these paradoxes.

Building leaders with the cognitive and behavioural complexity of the Michigan Model of Leadership is difficult. In this next section, we introduce our approach — called Mindful Engagement — to developing leaders who learn from experience how to navigate the choices and trade-offs required to thrive in today’s complex and dynamic environment.

 

Mindful Engagement: A Process for Developing Leaders Who Thrive in Complex Environments

Drawing from research in for-profit companies and governmental agencies around the world, with Susan Ashford (University of Michigan), we developed an approach to leadership development called Mindful Engagement. This approach is appropriate for developing leaders who thrive in complex environments where there is no single answer and the primary source of learning is experience. The process of Mindful Engagement is based on three basic principles: (1) Readying for Growth, (2) Taking Action to Learn, and (3) Reflecting to Retain.

 

Readying for Growth
Readying for growth is about preparing oneself to learn in complex, dynamic environments. It includes three specific steps: (1) building an awareness of strengths in context, (2) identifying specific learning goals, and (3) developing a learning mind-set.

Leaders must be aware of and understand how to leverage their own strengths. To build this awareness, we use a series of strengths-based assessments and exercises such as the Reflected Best Self (http://www.centerforpos.org/the-center/teaching-and-practice-materials/teaching-tools/reflected-best-self-exercise/). Best-self stories help individuals discover their strengths and realise their own potential and possibility as leaders. At the same time, leaders must understand that too much emphasis on any particular strength can create an opposing and countervailing force. For example, we are currently coaching an executive who has insatiable drive and an unparalleled commitment to results, but his singular focus on results is reducing cohesion in his senior management team. In complex and turbulent environments, leaders must find a way to leverage their strengths while making sure those strengths do not escalate to become the singular focus of their leadership. For many, this process is difficult because their strengths are exactly the reason they have been so successful. To address this mental hurdle, in our assessments, we not only identify individuals’ strengths but also provide real-life examples that offer insight into the potential risks and trade-offs associated with those strengths. We also routinely pair leaders with contrasting strengths to help them develop an appreciation for the risks of their own leadership style.

The second step is the development of specific learning goals. Clearly, if someone is strong in the red Strategic Structures quadrant, a natural learning goal will be to learn the core skills in a different quadrant, maybe the green Creative Change quadrant. But we emphasise a different approach. We ask leaders to commit to learning goals that focus, not on a particular quadrant, but on learning how to navigate the tensions and trade-offs among the four MMoL quadrants. Learning does not happen within quadrants — learning occurs as leaders focus on and navigate the tensions across quadrants. A recent example comes from an executive who focused her learning goal on stakeholder analysis as a way to understand the distinctive and sometimes conflicting needs and concerns of different stakeholders.

We ask leaders to commit to learning goals that focus, not on a particular quadrant, but on learning how to navigate the tensions and trade-offs among the four MMoL quadrants.

The third step is to develop a learning mind-set. Carol Dweck (Stanford University) suggests that people either have a performance mind-set (focused on achievement and on proving yourself) or a learning mind-set (focused on the belief that everyone can change and grow through experience). A performance mind-set values perfection or looking smart. A learning mind-set values experimentation and pushing the boundaries of our comfort zones. In a world where competing forces and trade-offs are the norm, perfection is a myth, and thus a performance mind-set impedes leader development. A learning mind-set, in contrast, encourages leaders to get out of their comfort zone and try new things. Mistakes in today’s complex world are inevitable. The challenge is to make sure you and your team learn from the mistake, and never make the same mistake twice.

 

Taking Action to Learn
Taking action to learn is about transforming the leader into his or her own R&D lab, where the leader is proactively experimenting with new ways of leading and taking steps to learn from those experiments. It is “skunk works” for proactive, self-directed leader development. To motivate taking action to learn, follow these steps:

First, leaders need to see, feel and experience the competing forces inherent in the MMoL. High-impact experiences are high-stakes (blue quadrant) and require individuals to organise diverse groups of people with limited time and resources (yellow and red quadrants) in service of facilitating innovation and change (green quadrant). At the Ross School of Business, for example, we created the Ross Impact Challenge, where 48 student teams have six days to develop a new, for-profit venture that creates economic and social value in Detroit, MI. The teams are composed of 500 people from 36 countries, granted limited time and resources, and challenged to create real impact that is visible in the Detroit community. To excel, the teams must balance the need for innovation against the need for structure, and the need for team cohesion against the need for results. As individuals work to transcend the competing tensions rather than merely compromise among them, deep learning occurs.

Second, taking action for learning requires that leaders commit to personal experimentation. At Ross, we encourage our students to see each and every experience, no matter how big or small, as an opportunity to experiment with new ways of leading. Recognising that experimentation will sometimes result in failure and mistakes — think about a pharmaceutical firm experimenting with new drug possibilities — we encourage leaders to commit to multiple, small experiments and to fail fast and early. Of course, the organisational culture and reward systems must allow and even support failure when that failure is in service of learning.

Third, leaders must commit to a set of actions focused on seeking feedback. Learning only occurs when leaders have deep insight into how their actions affect, positively and negatively, the willingness and ability of others to achieve organisational goals. The problem is that most organisations provide too little feedback, or feedback that is not constructive for learning how to lead in complex, dynamic environments. Rather than trying to change the feedback system, we find that a more effective point of intervention is teaching people how to proactively seek feedback that leads to deep insight and personal change. Basic principles include (a) create a routine question or prompt for feedback such as “What input can you give me on…?”; (b) seek feedback as close to the event in question as possible; (c) make it routine and part of your “style”; and (d) seek input from people besides your supervisor or subordinate, such as your customer or peers.

 

Reflecting to Retain
Reflecting to retain is about practices that enable people to capture and apply the lessons of experience for self-improvement. The roadblock to learning for most people is themselves — the psychological biases that create excuses, flawed attributions, or blinders that get in the way of learning from experience. To address these challenges, we developed and validated a structured reflection process that attacks the biases and enables people to learn in complex, dynamic environments. Most people and organisations avoid reflection altogether, focusing instead on the next task or the next emergency without giving much thought to the past. Even more problematic is that, according to our research, the typical reflection conversation (“What happened? How did it go? What did we learn?”) does not foster learning. Drawing from the military’s after-event review procedure, we developed a new structured process for reflection. The process asks leaders to: (a) describe the experience; (b) explain their reactions to the experience; (c) discuss “what if” scenarios that test alternative explanations for their performance; (d) identify insights about new behaviours that would improve performance; and (e) commit to at least two behaviour changes and specific milestones for making those behaviour changes. We have begun using this structured reflection process to build learning communities of peers who routinely discuss their experiences, test assumptions about their own performance, and help each other identify insights and action steps that will enable positive behaviour change in the future. The holy grail for most organisations is building a learning culture where individuals commit not only to their own personal growth but also to the personal growth of their colleagues. Our research shows that building structured reflection practices into the normal course of work is one way of building a learning organisation that cultivates leaders who can thrive in complex, dynamic environments.

The holy grail for most organisations is building a learning culture where individuals commit not only to their own personal growth but also to the personal growth of their colleagues.

Our world is filled with challenges. More than ever before, we need leaders who commit to living a life of mindful engagement in reach of their best selves. We need leaders who understand how to leverage the competing values inherent to business, who elevate society to higher ideals and standards. Finally, we need leaders with empathy, drive, integrity, and courage – across society and throughout organisational hierarchies – whose core purpose is to make a positive difference in the lives of others.

Are you that kind of leader?

About the Authors
D. Scott DeRue
is a management professor at the University of Michigan’s Stephen M. Ross School of Business. Reported by CNN/Money to be one of the top 40 business school professors under the age of 40, Scott’s teaching and research focus on how leaders and teams learn, adapt, and develop in complex and dynamic environments. (dsderue@umich.edu)

Gretchen Spreitzer
is a management professor at the University of Michigan’s Stephen M. Ross School of Business.  She is the author of four books on leadership and is a thought leader in the new field of Positive Organisations.  Her research focuses on employee empowerment and leadership development, particularly within a context of organisational change and decline. (spreitze@umich.edu)

Brian Flanagan is managing director of the Ross Leadership Initiative at the University of Michigan’s Stephen M. Ross School of Business. His work applies cutting-edge leadership research to development programs for students. He is interested in developing leaders who mobilize the highest potential in people, organisations, and society. (btflan@umich.edu)

Benjamin Allen is former assistant director of the Ross Leadership Initiative (RLI) at the University of Michigan’s Stephen M. Ross School of Business and current talent management specialist at Chrysler, LLC. During his tenure at RLI, Ben developed, planned, and executed leadership programs for students. He seeks to maximize the potential impact of all leaders and organisations. (BMA15@chrysler.com)