
DIGITAL TRANSFORMATION LEADERSHIP PRACTICE - DIGITIZE01

Leading in a disrupted world is a new game — and it is changing the way we live and work. Continuous automation cycles and the digital disruption of business models pose major new challenges for organizations and their leaders as they seek to transform their strategies for a digital age. 

Digital transformation is the integration of digital technology into all areas of a business, fundamentally changing how you operate and deliver value to customers.

It is also a cultural change that requires organizations to continually challenge the status quo, experiment, and get comfortable with failure. Digital transformation is imperative for all businesses, from small firms to the largest enterprises. Improving customer experience has become a crucial goal of digital transformation.

 

1. Overview

The rise of digital technologies has accelerated business disruption in every industry, generating huge innovation-driven opportunities.

Leading digital transformation requires not only a good understanding of disruptive technologies but also the ability to manage the change and uncertainty generated by the digital revolution itself.

This is not just about implementing technology and innovation; it is also about making the changes the organization needs to drive transformation through new strategic initiatives. To achieve this goal, corporations need leaders able to shape the digital future with interdisciplinary skills such as:

  • Business strategy
  • Organizational change management
  • Innovation-based opportunities

 

Digital leadership demands living by the enduring principles that make for success, and augmenting them with new qualities that enable speed, flexibility, risk-taking, an obsession with customers, and new levels of communication inside the organization.
 

Before we talk about digital leadership or transformation, let’s consider what we mean by digital.

We often think about digital in terms of social media, websites, apps and digital marketing. It’s no wonder we start here. These are the platforms and tools that engage audiences and funders with what we do and who we are as artists, organisations, and leaders.

Another starting point is thinking about digital in terms of innovation, of cutting-edge technologies that can transform the art we make and how it is shared. It is hardly surprising that cultural leaders are fascinated by the potential of the latest technology to imagine new opportunities for art and heritage experiences. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), for example, offer a toolbox rich in creative possibilities.

Digital is this, but not only this. Limiting ourselves to thinking about digital in terms of platforms or innovation alone holds back our organisations, our audiences, and our creative practice.

The new reality of digital, described so well by Julie Dodd, is the result of a scale of change “in how people choose to communicate, watch TV, learn, bank, shop and organise their lives” that has been likened to the industrial revolution.

The definition I’ve found most helpful in recognising this new reality for leaders and organisations is that offered by Tom Loosemore, who defines digital as “Applying the culture, practices, processes & technologies of the Internet-era to respond to people’s raised expectations.”
 

2. Skills Set for Digital Transformation Leadership

2.1 Digital Leadership and Transformation

With the changes that people are making in how they communicate and organise their lives, leaders and organisations in the cultural sector need to keep adapting to be effective in the Internet-era.

So, what does digital leadership look like? What skills and approaches are needed?

The great people at Dot Everyone, with their mission of making the UK’s leaders digitally literate, suggest that “being a leader in the digital age means understanding technology as much as you understand money, HR, or the law.” If leaders have digital understanding, they can then make “confident, informed and effective decisions for their organisation and their users”.

Skills development is a continuous process for leaders. We can all develop our digital understanding through learning about digital trends and tools, and practicing the skills that enable us to lead well. When it comes to digital, this doesn’t mean that we all need to be technologists and coders. What we do need is enough digital understanding to recognise our skills gaps and identify who we can work with so that our projects and organisations can thrive.

Skilled digital leadership is needed to transform our organisations to be fit for the Internet-era, through a process of building new capacities, structures and ways of working.

Where to start? Being curious about our users and audiences, and open to learning what can be done differently to serve them better, has huge potential to benefit cultural organisations. Agile and Design Thinking approaches to organisational and programme development informed by user needs are being explored by cultural organisations across the country. Understanding what our users need and how our work could meet these needs can make what we do more appealing, relevant, fundable, accessible, and cheaper to develop and deliver.

By learning from data and insights about our users and their needs, we can transform cultural leadership and cultural organisations to thrive in the Internet-era, and move away from limiting ourselves to thinking about digital in terms of platforms or innovation alone.

 

Six Characteristics of Digital Leadership

  1. Recognising that digital is not always about scale or flashy projects; it’s about transforming people and ways of working
  2. Developing digital skills across the organisation, not just within a separate department
  3. Instead of a digital strategy, integrating digital processes and technologies to serve and shape business and artistic strategies
  4. Providing leaders with a mandate and budget to test and embed digital technology and agile ways of working
  5. Starting all programmes and projects with user research and user needs, iterating what you do and how you do it in response to feedback
  6. Inspiring teams and boards about the benefits of digital transformation with tangible proof of concept, even if the successful experiments are small in scale.
     

 

2.2 Agile Software Development

Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages rapid and flexible response to change.

The term agile (sometimes written Agile) was popularized, in this context, by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban.

There is significant anecdotal evidence that adopting agile practices and values improves the agility of software professionals, teams and organizations; however, some empirical studies have found no scientific evidence.
 

2.2.1 The Manifesto for Agile Software Development

2.2.1.1 Agile software development values

Based on their combined experience of developing software and helping others do that, the seventeen signatories to the manifesto proclaimed that they value:

  1. Individuals and Interactions over processes and tools
  2. Working Software over comprehensive documentation
  3. Customer Collaboration over contract negotiation
  4. Responding to Change over following a plan 

 

2.2.1.2 Agile software development principles

The Manifesto for Agile Software Development is based on twelve principles:

  1. Customer satisfaction by early and continuous delivery of valuable software.
  2. Welcome changing requirements, even in late development.
  3. Deliver working software frequently (weeks rather than months).
  4. Close, daily cooperation between business people and developers.
  5. Projects are built around motivated individuals, who should be trusted.
  6. Face-to-face conversation is the best form of communication (co-location).
  7. Working software is the primary measure of progress.
  8. Sustainable development, able to maintain a constant pace.
  9. Continuous attention to technical excellence and good design.
  10. Simplicity—the art of maximizing the amount of work not done—is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. Regularly, the team reflects on how to become more effective, and adjusts accordingly.

 

2.3 Digital Marketing

Digital marketing is the marketing of products or services using digital technologies, mainly on the Internet, but also including mobile phones, display advertising, and any other digital medium.

Digital marketing's development since the 1990s and 2000s has changed the way brands and businesses use technology for marketing. As digital platforms are increasingly incorporated into marketing plans and everyday life, and as people use digital devices instead of visiting physical shops, digital marketing campaigns are becoming more prevalent and efficient.

Digital marketing methods such as search engine optimization (SEO), search engine marketing (SEM), content marketing, influencer marketing, content automation, campaign marketing, data-driven marketing, e-commerce marketing, social media marketing, social media optimization, e-mail direct marketing, display advertising, e-books, and optical disks and games are becoming more common as technology advances. In fact, digital marketing now extends to non-Internet channels that provide digital media, such as mobile phones (SMS and MMS), callback, and on-hold mobile ring tones. In essence, this extension to non-Internet channels helps to differentiate digital marketing from online marketing, another catch-all term for the marketing methods mentioned above, which strictly occur online.
 

2.4 Modern Finances & Budgeting

Budgeting—Modern finance goes beyond manual spreadsheet-based budgeting by delivering connected and integrated enterprise planning and budgeting, which offers greater participation and increased accuracy.

 

2.4.1 Why Modern Finance is Key to Your Business Success

Regardless of your finance systems challenge—infrastructure, processes or resources—there is a clear path to modern finance. 

Finance systems have evolved from foundational bookkeeping concepts developed by Venice’s Luca Pacioli in 1494. Pacioli’s published work defined double-entry accounting, and he included concepts and definitions for ledgers, assets, receivables, inventories, liabilities, capital, income and expenses. Perhaps because he was also a religious cleric, he discussed compliance and ethics for finance professionals, including his advice not to go to sleep each day until debits equaled credits.

The principles Pacioli developed over 500 years ago—two years after Christopher Columbus first sailed to the western hemisphere—defined the legacy transactional financial systems developed in the last century. It was a great match that married a 15th century book with 20th century technology.
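Pacioli’s end-of-day rule, that total debits must equal total credits, still underpins every ledger and is easy to express in code. A minimal Python sketch of the double-entry invariant (account names and amounts are purely illustrative, not drawn from the text):

```python
# Minimal sketch of Pacioli's double-entry rule: every transaction posts
# equal debits and credits, so the ledger must balance before "close".
from collections import defaultdict

ledger = defaultdict(lambda: {"debit": 0.0, "credit": 0.0})

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

def trial_balance():
    """Pacioli's end-of-day test: total debits must equal total credits."""
    debits = sum(a["debit"] for a in ledger.values())
    credits = sum(a["credit"] for a in ledger.values())
    return debits, credits, debits == credits

post("Inventory", "Accounts Payable", 500.0)   # buy stock on credit
post("Accounts Receivable", "Sales", 800.0)    # sell on account

debits, credits, balanced = trial_balance()
print(f"debits={debits}, credits={credits}, balanced={balanced}")
```

Because each posting writes the same amount to both sides, the trial balance can only fail if an entry is recorded outside `post`, which is exactly the kind of control a transactional finance system enforces.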

1. Finance in the 21st Century. Then the landscape changed dramatically and suddenly early in this century. The roughly parallel agility and efficiency curves began to diverge exponentially.

The rate and pace of change involving factors such as innovation (particularly in technology), regulations, and competition (especially competitors with new and disruptive business models) outpaced the IT infrastructure, business processes and resources underpinning a company’s ability to respond. The gap was small at first, but over the last decade it has dramatically increased. It is now the new normal.

This gap creates its own fundamental problem: it drives company leaders to rely on “gut feel” and instinct to make business decisions rather than using rapid, real-time access to accurate data. This is a terrible outcome, as the practice delays decision making and introduces errors. The net result erodes operational and management confidence, as delays and errors compound poor choices, depress sales, and hurt employee morale.

2. Enter IFRS 15. Add in complex global issues with changing international norms—like IFRS 15—and the next result is a collection of variables that impact every aspect of your company, especially if you are selling and working across borders.
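Part of why IFRS 15 forces software and process change is its five-step revenue model. As a rough illustration, here is a sketch of step 4, allocating a contract’s transaction price across performance obligations in proportion to their standalone selling prices (the bundle and all figures below are hypothetical):

```python
def allocate_transaction_price(total_price, standalone_prices):
    """Allocate a contract's transaction price across performance
    obligations in proportion to their standalone selling prices
    (step 4 of the IFRS 15 five-step model)."""
    total_standalone = sum(standalone_prices.values())
    return {
        obligation: round(total_price * price / total_standalone, 2)
        for obligation, price in standalone_prices.items()
    }

# Hypothetical bundle: a 1,200 contract covering a licence normally sold
# for 1,000 and a support plan normally sold for 500 (total 1,500), so
# each obligation receives 1200/1500 = 80% of its standalone price.
allocation = allocate_transaction_price(
    1200.0, {"licence": 1000.0, "support": 500.0}
)
print(allocation)  # {'licence': 800.0, 'support': 400.0}
```

Each allocated amount is then recognized as the corresponding obligation is satisfied, which is why billing, contract, and ledger systems all need to track obligations rather than just invoices.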

3. From Finance Challenges to Opportunities. Since finance is a part of every organization—regardless of size, location and industry—this challenge of keeping up with systemic change becomes an opportunity to boldly leverage disruption rather than hunkering down in fear. Businesses that accept this challenge embrace new and leading technologies, standardize on best practices and invest in the next generation of financial software and their employees. Regardless of your finance systems challenge—infrastructure, processes or resources—there is a clear path to modern finance that is well defined and ready to support your company’s success.

2.4.1.1 Top 10 Reasons to Deploy Modern Finance

Deploy modern finance when one or more of these conditions is present in your business:

1. Proliferating non-integrated systems are generating more disparate data. New technologies, including mobile devices and Internet of Things objects, are generating data that eventually impacts finance.

2. Data imports and integrations are increasing. With new data types and volumes both increasing, more external imports and new integrations are crafted through APIs, business services, and other vehicles.

3. System maintenance costs are increasing, and systems are harder to keep current. The hardware, software licenses, and talent are getting more expensive every year, with few if any noticeable enhancements.

4. Reporting is getting harder, and more real-time reports are requested. Despite all the issues, more reports—in real time, no less—are requested.

5. Spreadsheets are proliferating. With increases in data, legacy system issues, and growing reporting demands, spreadsheets gain additional prominence to complete finance analysis and analytics.

6. Rapid and global company growth into new markets and new geographies. Simultaneously, the business decides to enter new venues and naturally expects finance to be ready, no questions asked.

7. New and changing local and international compliance requirements. As more financial transactions span the globe, more entities are promulgating newer and tougher oversight and disclosure.

8. It is time for an IPO (or other major business model evolution). Finance is asked to support rapidly evolving business model plans like going public in an IPO.

9. The population of next-generation employees is increasing. Each month more next-gen employees arrive to begin their careers, armed with experiences and education built around a mobile, social world.

10. Close processes are longer and more complex, while desire grows for continuous closes. It is today’s ultimate financial dilemma: traditional close processes are getting more difficult, but management wants them done faster.
 

2.4.2 Capabilities of a Modern Finance System

Modern finance requires an integrated and modular set of applications that lets you start anywhere and achieve fast time to value. This approach delivers integrated data utilizing a common schema which leverages extensive historical investments in legacy Oracle and non-Oracle transactional systems. Three key financial capabilities form the foundation of any modern finance system:
 

1. Management Insight. Enterprise Performance Management solutions provide you with the ability to align strategy with plans and execution, providing the management insight needed to guide your business forward with:

  • Intuitive web interface with full integrations to popular office productivity tools
  • Virtually zero training with included online help and tutorials
  • Powerful and scalable modeling engines
  • Secure, role-based collaboration for account reconciliation and narrative reporting
  • Out-of-the-box support for best practices including rolling forecasts and multi-currency

2. Compliance and Control. Risk and Compliance Management strengthens internal controls by optimizing controls within and outside your financial processes. Capabilities include:

  • Centralized and secure repository of risks and controls
  • Automated assessments of internal controls compliance
  • Enterprise-wide visibility for confident control certifications
  • Continuous monitoring of payables and travel expense transactions for exceptions
  • Detection of duplicate supplier invoices before payments are made
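One of the controls listed above, catching duplicate supplier invoices before payment, can be sketched as a simple exact-match check. Production systems use fuzzier matching on near-identical amounts and dates; the supplier and invoice data below are hypothetical:

```python
# Sketch of a payables control: flag likely duplicate supplier invoices
# before payment. A "duplicate" here is the same supplier, invoice
# number, and amount appearing more than once in the batch.
from collections import Counter

invoices = [
    {"supplier": "Acme Ltd", "invoice_no": "INV-001", "amount": 250.00},
    {"supplier": "Acme Ltd", "invoice_no": "INV-002", "amount": 400.00},
    {"supplier": "Acme Ltd", "invoice_no": "INV-001", "amount": 250.00},  # re-submitted
]

def find_duplicates(invoices):
    """Return the (supplier, invoice_no, amount) keys seen more than once."""
    keys = Counter(
        (inv["supplier"], inv["invoice_no"], inv["amount"]) for inv in invoices
    )
    return [key for key, count in keys.items() if count > 1]

duplicates = find_duplicates(invoices)
print(duplicates)  # [('Acme Ltd', 'INV-001', 250.0)]
```

Running this check continuously over the payables queue, rather than during a periodic audit, is what turns it from a detective control into a preventive one.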

3. Operational Agility. Financial Management establishes a foundation for local, regional and global operational agility, providing a digital back office that meets the needs of today’s business by incorporating:

  • End-to-end best practice business processes
  • Full multi-GAAP, multi-currency and multi-entity support
  • Embedded analytics for self-monitoring processes in work areas
  • Secure and in-context collaboration and workflow
  • Fully integrated out-of-the-box invoice imaging

Besides extensive finance function capabilities, a modern system also requires a rich digital core that is easy to use, embraces social communications platforms, delivers embedded analytics, and is accessible on mobile devices of any size. It can integrate easily with embedded tools, offers enterprise-grade security, and deploys rapidly for fast time to value.

 

2.4.3 Finance Challenges and Opportunities

There is no doubt about disruption being a significant force across the globe for every business and industry. The popular phrase—you are either a disruptor or being disrupted—also applies to finance organizations, as company leadership requires a move from operational efficiency to operational agility as the new business paradigm. This shift is driven by three challenges common to every business regardless of geography or industry:

1. With Finance in the 21st Century, Data-Driven Decisions Become Elusive. The default tool for many companies is the spreadsheet. Too often finance teams find their current systems cannot analyze their data, so a collective decision is made to start a new spreadsheet. As spreadsheets proliferate and variations multiply with no control, the decisions they drive become harder to reach and may introduce error or doubt.

2. Problems with Compliance Complacency. Additional scrutiny is required as new regulations and standards evolve. For example, as IFRS 15 is deployed, software and systems need to be updated, staff skills refreshed, and processes changed. It is not a simple task. Add in a company’s growth into new markets in new geographies, and the ability to achieve accurate and meaningful compliance becomes more difficult.

3. Complex Legacy Systems and Environment. Modern activities require modern systems. Legacy on-premises software solutions carry high fixed costs coupled to large implementation and maintenance expenses heavily weighted toward cap-ex. Delayed on-premises projects have a cascading effect that further impacts moving toward data-driven decision making and improving compliance performance.

These challenges highlight why organizations move to the cloud. As legacy on-premises systems are converted to op-ex cloud solutions, finance departments embrace software that is always current, global and innovative. With the right technology for modern finance, both direct and indirect costs are reduced. Working in the cloud unleashes collaboration, which in turn delivers more business value.

 

2.4.4 Building a Modern Finance Operation

Now is the time to say goodbye to process-focused finance models. Digital has blown it up. Traditional finance activities— including transaction processing, control and risk management, and reporting, analytics and forecasting—have been redesigned and reconfigured. The cloud elevates finance to the insight engine for your business using three essential elements:

1. Management Insight with Analytics Competency Centers. Analytics “gurus” do more than analyze financials. They assess product, customer, expense and project trends. Employees use self-service to explore data “in the moment” to understand the financial impact of operational events and decisions. No more relying on finance to do it for them. And no more trips to IT begging for custom reports.

2. Compliance and Control with Communications and Control Centers. These centers are focused on control, compliance, communications and risk management to consolidate the fundamentals of finance operations: statutory accounting, compliance, tax, treasury and investor relations. They are nimble, responsive and cost-effective, aligning specialized teams around streamlined work processes. For example, one telecom provider is consolidating its fragmented income, property and sales tax organizations. A digital data warehouse automates most routine tax reporting and compliance. This way tax professionals can now focus on optimizing the tax structure of the organization to better support the business strategy.

3. Operational Agility Leveraging Integrated Business Services. These teams deliver complete services to employees, customers and suppliers across functions. They bundle accounting and transaction processing typically performed by finance with tasks from other business areas. Global consulting firm Accenture estimates that by 2020 more than 80 percent of traditional finance services will be delivered by cross-functional teams working within integrated business services teams.

 

2.4.5 Example - Deploying Modern Finance with Oracle:

Together with Oracle’s other modern applications, finance is delivered in a single cloud that is enterprise-grade and ready to grow your business today. Oracle’s modern finance solution is built on a robust union of platform and applications, fully integrated with a plethora of solutions covering supply chain management, human resources, sales, marketing and customer activities.

These platform and application characteristics are:

1. Modern-Standards Delivered with Modern Best Practice

Oracle leverages a growing library of 183 published Modern Best Practice processes delivered through secure and highly scalable Oracle-owned and operated cloud platforms. With business applications that are fully integrated and connected, organizations quickly get to work with personalized—never customized—modern and up-to-date software emboldened with revolutionary reporting (including embedded analytics) delivered around intuitive and socially enabled user experiences.

2. Modern Economics Approach

There is a significant economic benefit when you modernize finance in the cloud and move from a cap-ex approach to op-ex. Oracle and its partners offer research and tools to determine the financial advantages of a cloud finance project and services.

  • Cloud Solutions Deliver Savings
  • Cloud ROI Calculator
  • Finance Self-Assessment Tool
  • Cloud Marketplace with Partner Solutions
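The cap-ex versus op-ex comparison behind tools like a cloud ROI calculator reduces to simple arithmetic: a large up-front licence purchase plus recurring maintenance versus a pure subscription. A back-of-the-envelope sketch, where all figures are hypothetical placeholders rather than vendor numbers:

```python
# Hypothetical cap-ex vs op-ex total-cost comparison over a planning
# horizon. Real ROI models also discount cash flows and add migration,
# training, and infrastructure costs; this shows only the core shape.

def total_cost_on_premises(license_capex, annual_maintenance, years):
    """Up-front licence spend plus yearly maintenance (cap-ex heavy)."""
    return license_capex + annual_maintenance * years

def total_cost_cloud(annual_subscription, years):
    """Pay-as-you-go subscription (op-ex only)."""
    return annual_subscription * years

years = 5
on_prem = total_cost_on_premises(license_capex=500_000,
                                 annual_maintenance=100_000, years=years)
cloud = total_cost_cloud(annual_subscription=150_000, years=years)

print(f"5-year on-premises: {on_prem:,}")        # 1,000,000
print(f"5-year cloud:       {cloud:,}")          # 750,000
print(f"savings:            {on_prem - cloud:,}")  # 250,000
```

The crossover point depends entirely on the inputs, which is why the self-assessment tools above ask for your own licence, maintenance, and subscription figures before claiming any savings.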

3. Modern Business Applications

For your move to modern finance, Oracle delivers a comprehensive collection of cloud applications that leverage integrated ERP and EPM solutions to deliver real-time answers to your business questions.

  • Oracle Enterprise Performance Management (EPM) Cloud
  • Oracle Financials Cloud
  • Oracle Revenue Management Cloud
  • Oracle Risk Management Cloud
  • Oracle Accounting Hub Cloud

 

2.5 Legal Tech

Legal technology, also known as Legal Tech, refers to the use of technology and software to provide legal services. Legal Tech companies are generally startups founded with the purpose of disrupting the traditionally conservative legal market.

Legal technology traditionally referred to the application of technology and software to help law firms with practice management, document storage, billing, accounting and electronic discovery. Since 2011, Legal Tech has evolved to be associated more with technology startups disrupting the practice of law by giving people access to online software that reduces or in some cases eliminates the need to consult a lawyer, or by connecting people with lawyers more efficiently through online marketplaces and lawyer-matching websites.

The legal industry is widely seen to be conservative and traditional, with Law Technology Today noting that "in 50 years, the customer experience at most law firms has barely changed". Reasons for this include the fact law firms face weaker cost-cutting incentives than other professions (since they pass disbursements directly to their client) and are seen to be risk averse (as a minor technological error could have significant financial consequences for a client).

However, the growth of the hiring by businesses of in-house counsel and their increasing sophistication, together with the development of email, has led to clients placing increasing cost and time pressure on their lawyers. In addition, there are increasing incentives for lawyers to become technologically competent, with the American Bar Association voting in August 2012 to amend the Model Rules of Professional Conduct to require lawyers to keep abreast of "the benefits and risks associated with relevant technology", and the saturation of the market leading many lawyers to look for cutting-edge ways to compete. The exponential growth in the volume of documents (mostly email) that must be reviewed for litigation cases has greatly accelerated the adoption of technology used in eDiscovery, with elements of machine learning and artificial intelligence being incorporated and cloud-based services being adopted by law firms.

Investment in Legal Tech is predominantly focused in the United States, where more than $254 million was invested in 2014.

Stanford Law School has started CodeX, the Center for Legal Informatics, an interdisciplinary research center, which also incubates companies started by law students and computer scientists. Some companies that have come out of the program include Lex Machina and Legal.io.
 

2.5.1 Legal Tech Key Areas

Traditional areas of Legal Tech include:

  • Accounting
  • Billing
  • Document automation
  • Document storage
  • Electronic discovery
  • Legal research
  • Practice management


More recent areas of growth in Legal Tech focus on:

  • Providing tools or a marketplace to connect clients with lawyers
  • Providing tools for consumers and businesses to complete legal matters by themselves, obviating the need for a lawyer
  • Data and contract analytics
  • Use of legally binding digital signature, which helps verify the digital identity of each signer, maintains the chain of custody for the documents and can provide audit trails
  • Automation of legal writing or other substantive aspects of legal practice
  • Platforms for succession planning, e.g. will writing, via online applications
  • Providing tools to assist with immigration document preparation in lieu of hiring a lawyer.
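The audit-trail idea behind legally binding digital signatures can be illustrated with a hash chain: each event records the hash of the previous event, so any tampering breaks the chain. This sketch shows only the chaining; real e-signature platforms add cryptographic signatures, identity verification, and trusted timestamps, and the events below are hypothetical:

```python
# Hash-chained audit trail: each record includes the previous record's
# hash, so altering any earlier event invalidates everything after it.
import hashlib
import json

def append_event(trail, event):
    """Append an event whose hash covers the event and the previous hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    trail.append(record)
    return trail

def verify(trail):
    """Recompute every hash and check each link back to the chain start."""
    prev_hash = "0" * 64
    for record in trail:
        if record["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps({"event": record["event"],
                        "prev_hash": record["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_event(trail, "document uploaded")
append_event(trail, "signed by alice@example.com")
print(verify(trail))               # True: the chain is intact
trail[0]["event"] = "document replaced"  # tamper with history
print(verify(trail))               # False: the chain is broken
```

This is the chain-of-custody property the list above refers to: the trail does not prevent tampering, but it makes any tampering detectable.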

2.5.2 Notable Legal Tech companies


Russia

    Legal research:

  • Consultant Plus
  • Garant

United Kingdom

  • Lexoo
  • Practical Law Company

United States

Seed round:

  • Legal.io
  • LegalEase

Series A and beyond:

  • Lex Machina
  • Ravel Law
  • Rocket Lawyer
  • UpCounsel
  • Wevorce

Other legal technology companies:

  • Bloomberg Law
  • Legalzoom
  • LexisNexis
  • Recommind
  • TransPerfect
  • Westlaw

 

2.6 Disruptive Technologies

In business theory, a disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products, and alliances. The term was defined and first analyzed by the American scholar Clayton M. Christensen and his collaborators beginning in 1995, and has been called the most influential business idea of the early 21st century.

Not all innovations are disruptive, even if they are revolutionary. For example, the first automobiles in the late 19th century were not a disruptive innovation, because early automobiles were expensive luxury items that did not disrupt the market for horse-drawn vehicles. The market for transportation essentially remained intact until the debut of the lower-priced Ford Model T in 1908.[5] The mass-produced automobile was a disruptive innovation, because it changed the transportation market, whereas the first thirty years of automobiles did not.

Disruptive innovations tend to be produced by outsiders and entrepreneurs in startups, rather than existing market-leading companies. The business environment of market leaders does not allow them to pursue disruptive innovations when they first arise, because they are not profitable enough at first and because their development can take scarce resources away from sustaining innovations (which are needed to compete against current competition). A disruptive innovation can take longer to develop than one following the conventional approach, and the risk associated with it is higher than that of other, more incremental or evolutionary forms of innovation; but once it is deployed in the market, it achieves much faster penetration and a higher degree of impact on the established markets.

Beyond business and economics, disruptive innovations can also be considered to disrupt complex systems, including economic and business-related aspects.


2.6.1 History of Disruptive Technologies

The term disruptive technologies was coined by Clayton M. Christensen and introduced in his 1995 article Disruptive Technologies: Catching the Wave, which he cowrote with Joseph Bower. The article is aimed at management executives who make the funding or purchasing decisions in companies, rather than the research community. He describes the term further in his book The Innovator's Dilemma, which explored the cases of the disk drive industry (which, with its rapid generational change, is to the study of business what fruit flies are to the study of genetics, as Christensen was advised in the 1990s[11]) and the excavating equipment industry (where hydraulic actuation slowly displaced cable-actuated movement).

In his sequel with Michael E. Raynor, The Innovator's Solution, Christensen replaced the term disruptive technology with disruptive innovation because he recognized that few technologies are intrinsically disruptive or sustaining in character; rather, it is the business model that the technology enables that creates the disruptive impact. Christensen's evolution from a technological focus to a business-modelling focus is central to understanding the evolution of business at the market or industry level. Christensen and Mark W. Johnson, who cofounded the management consulting firm Innosight, described the dynamics of "business model innovation" in the 2008 Harvard Business Review article "Reinventing Your Business Model".

The concept of disruptive technology continues a long tradition of identifying radical technical change in the study of innovation by economists, and the development of tools for its management at a firm or policy level.

The term “disruptive innovation” is misleading when it is used to refer to a product or service at one fixed point, rather than to the evolution of that product or service over time.

In the late 1990s, the automotive sector began to embrace a perspective of "constructive disruptive technology" by working with the consultant David E. O'Ryan, whereby the use of current off-the-shelf technology was integrated with newer innovation to create what he called "an unfair advantage". The process or technology change as a whole had to be "constructive" in improving the current method of manufacturing, yet disruptively impact the whole of the business case model, resulting in a significant reduction of waste, energy, materials, labor, or legacy costs to the user.

In keeping with the insight that what matters economically is the business model, not the technological sophistication itself, Christensen's theory explains why many disruptive innovations are not "advanced technologies", which a default hypothesis would lead one to expect. Rather, they are often novel combinations of existing off-the-shelf components, applied cleverly to a small, fledgling value network.

The online news site TechRepublic has proposed abandoning the term and similar related terms, arguing that by 2014 they had become overused jargon.

2.6.2 Theory of Disruptive Technologies

The current theoretical understanding of disruptive innovation differs from what might be expected by default, an idea that Clayton M. Christensen called the "technology mudslide hypothesis". This is the simplistic idea that an established firm fails because it doesn't "keep up technologically" with other firms. In this hypothesis, firms are like climbers scrambling upward on crumbling footing, where it takes constant upward-climbing effort just to stay still, and any break from the effort (such as complacency born of profitability) causes a rapid downhill slide. Christensen and colleagues have shown that this simplistic hypothesis is wrong: it doesn't model reality. What they have shown is that good firms are usually aware of the innovations, but their business environment does not allow them to pursue them when they first arise, because they are not profitable enough at first and because their development can divert scarce resources from sustaining innovations (which are needed to compete against current competitors). In Christensen's terms, a firm's existing value networks place insufficient value on the disruptive innovation to allow its pursuit by that firm. Meanwhile, start-up firms inhabit different value networks, at least until the day that their disruptive innovation is able to invade the older value network. At that time, the established firm in that network can at best only fend off the market-share attack with a me-too entry, for which survival (not thriving) is the only reward.

Christensen defines a disruptive innovation as a product or service designed for a new set of customers.

Generally, disruptive innovations were technologically straightforward, consisting of off-the-shelf components put together in a product architecture that was often simpler than prior approaches. They offered less of what customers in established markets wanted and so could rarely be initially employed there. They offered a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream. 

Christensen argues that disruptive innovations can hurt successful, well-managed companies that are responsive to their customers and have excellent research and development. These companies tend to ignore the markets most susceptible to disruptive innovations, because the markets have very tight profit margins and are too small to provide a good growth rate to an established (sizable) firm.[16] Thus, disruptive technology provides an example of an instance when the common business-world advice to "focus on the customer" (or "stay close to the customer", or "listen to the customer") can be strategically counterproductive.

While Christensen argued that disruptive innovations can hurt successful, well-managed companies, O'Ryan countered that "constructive" integration of existing, new, and forward-thinking innovation could improve the economic benefits of these same well-managed companies, once decision-making management understood the systemic benefits as a whole. 

Christensen distinguishes between "low-end disruption", which targets customers who do not need the full performance valued by customers at the high end of the market, and "new-market disruption", which targets customers who have needs that were previously unserved by existing incumbents.

"Low-end disruption" occurs when the rate at which products improve exceeds the rate at which customers can adopt the new performance. Therefore, at some point the performance of the product overshoots the needs of certain customer segments. At this point, a disruptive technology may enter the market and provide a product that has lower performance than the incumbent but that exceeds the requirements of certain segments, thereby gaining a foothold in the market.

In low-end disruption, the disruptor focuses initially on serving the least profitable customers, who are happy with a good-enough product. These customers are not willing to pay a premium for enhancements in product functionality. Once the disruptor has gained a foothold in this segment, it seeks to improve its profit margin. To get higher margins, the disruptor needs to enter the segment where customers are willing to pay a little more for higher quality, and to deliver that quality it must innovate. The incumbent does little to retain its share in a not-so-profitable segment and moves up-market to focus on its more attractive customers. After a number of such encounters, the incumbent is squeezed into smaller markets than it was previously serving. Finally, the disruptive technology meets the demands of the most profitable segment and drives the established company out of the market.
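The dynamics described above can be made concrete with a toy model. All the numbers below (starting performance levels, improvement rates, and the mainstream segment's needs) are invented purely for illustration: both products improve over time, but the disruptor improves faster, and at some point its product becomes good enough for the mainstream segment even though the incumbent's product is still objectively better.

```python
# Toy model of low-end disruption (all numbers are illustrative only).
# Performance improves linearly for both firms; the mainstream
# segment's needs grow more slowly, so the incumbent overshoots
# them while the disruptor catches up from below.

def low_end_disruption(years=20,
                       incumbent_start=50, incumbent_rate=8,
                       disruptor_start=10, disruptor_rate=10,
                       needs_start=40, needs_rate=4):
    crossover = None
    for t in range(years + 1):
        incumbent = incumbent_start + incumbent_rate * t
        disruptor = disruptor_start + disruptor_rate * t
        needs = needs_start + needs_rate * t
        # The disruptor can "invade" the mainstream once its product
        # is good enough for that segment, regardless of how far the
        # incumbent has overshot those needs.
        if crossover is None and disruptor >= needs:
            crossover = t
    return crossover

print(low_end_disruption())  # → 5, the year the disruptor meets mainstream needs
```

If the disruptor improves no faster than customer needs grow (for example, `disruptor_rate=4` in this sketch), it never crosses the threshold and no disruption occurs, which matches the theory's emphasis on relative rates of improvement.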

"New market disruption" occurs when a product fits a new or emerging market segment that is not being served by existing incumbents in the industry.

The extrapolation of the theory to all aspects of life has been challenged,[18][19] as has the methodology of relying on selected case studies as the principal form of evidence. Jill Lepore points out that some companies identified by the theory as victims of disruption a decade or more ago, rather than being defunct, remain dominant in their industries today (including Seagate Technology, U.S. Steel, and Bucyrus). Lepore questions whether the theory has been oversold and misapplied, as if it were able to explain everything in every sphere of life, including not just business but education and public institutions.
 

2.6.2 Disruptive Technologies 101

In 2009, Milan Zeleny described high technology as disruptive technology and raised the question of what is being disrupted. The answer, according to Zeleny, is the support network of high technology.[20] For example, introducing electric cars disrupts the support network for gasoline cars (the network of gas and service stations). Such disruption is fully expected and therefore effectively resisted by support net owners. In the long run, high (disruptive) technology bypasses, upgrades, or replaces the outdated support network. Questioning the concept of a disruptive technology, Haxell (2012) asks how such technologies get named and framed, pointing out that this is a positioned and retrospective act.

Technology, being a form of social relationship, always evolves. No technology remains fixed. Technology starts, develops, persists, mutates, stagnates, and declines, just like living organisms.[23] The evolutionary life cycle occurs in the use and development of any technology. A new high-technology core emerges and challenges existing technology support nets (TSNs), which are thus forced to coevolve with it. New versions of the core are designed and fitted into an increasingly appropriate TSN, with smaller and smaller high-technology effects. High technology becomes regular technology, with more efficient versions fitting the same support net. Finally, even the efficiency gains diminish, emphasis shifts to product tertiary attributes (appearance, style), and technology becomes TSN-preserving appropriate technology. This technological equilibrium state becomes established and fixated, resisting being interrupted by a technological mutation; then new high technology appears and the cycle is repeated.

Regarding this evolving process of technology, Christensen said: 
The technological changes that damage established companies are usually not radically new or difficult from a technological point of view. They do, however, have two important characteristics: First, they typically present a different package of performance attributes—ones that, at least at the outset, are not valued by existing customers. Second, the performance attributes that existing customers do value improve at such a rapid rate that the new technology can later invade those established markets.

The World Bank's 2019 World Development Report on The Changing Nature of Work[25] examines how technology shapes the relative demand for certain skills in labor markets and expands the reach of firms: robotics and digital technologies, for example, enable firms to automate (replacing labor with machines to become more efficient) and to innovate (expanding the number of tasks and products). Joseph Bower explained how a disruptive technology, through its requisite support net, can dramatically transform an industry:

When a technology that has the potential for revolutionizing an industry emerges, established companies typically see it as unattractive: it's not something their mainstream customers want, and its projected profit margins aren't sufficient to cover a big company's cost structure. As a result, the new technology tends to get ignored in favor of what's currently popular with the best customers. But then another company steps in to bring the innovation to a new market. Once the disruptive technology becomes established there, smaller-scale innovations rapidly raise the technology's performance on attributes that mainstream customers value.

For example, the automobile was high technology with respect to the horse carriage; however, it evolved into regular technology and finally into appropriate technology with a stable, unchanging TSN. The main high-technology advance in the offing is some form of electric car, whether its energy source is the sun, hydrogen, water, air pressure, or a traditional charging outlet. Electric cars preceded the gasoline automobile by many decades and are now returning to replace it. The printing press changed the way information was stored, transmitted, and replicated. It empowered authors, but it also promoted censorship and information overload in writing technology.

Milan Zeleny described the above phenomenon. He also wrote that:

Implementing high technology is often resisted. This resistance is well understood on the part of active participants in the requisite TSN. The electric car will be resisted by gas-station operators in the same way automated teller machines (ATMs) were resisted by bank tellers and automobiles by horsewhip makers. Regular technology does not qualitatively restructure the TSN and therefore will not be resisted and never has been resisted. Middle management resists business process reengineering because BPR represents a direct assault on the support net (coordinative hierarchy) it thrives on. Teamwork and multi-functionality are resisted by those whose TSN provides the comfort of narrow specialization and command-driven work.

Social media could be considered a disruptive innovation within sports, specifically in the way sports news circulates today versus the pre-internet era, when it came mainly from TV, radio, and newspapers. Social media has created a new market for sports that did not exist before, in the sense that players and fans have instant access to sports-related information.

 

2.6.3 High-technology Effects of Disruptive Technologies

High technology is a technology core that changes the very architecture (structure and organization) of the components of the technology support net. High technology therefore transforms the qualitative nature of the TSN's tasks and their relations, as well as their requisite physical, energy, and information flows. It also affects the skills required, the roles played, and the styles of management and coordination—the organizational culture itself.

This kind of technology core is different from regular technology core, which preserves the qualitative nature of flows and the structure of the support and only allows users to perform the same tasks in the same way, but faster, more reliably, in larger quantities, or more efficiently. It is also different from appropriate technology core, which preserves the TSN itself with the purpose of technology implementation and allows users to do the same thing in the same way at comparable levels of efficiency, instead of improving the efficiency of performance.

As for the difference between high technology and low technology, Milan Zeleny once said:

The effects of high technology always break the direct comparability by changing the system itself, therefore requiring new measures and new assessments of its productivity. High technology cannot be compared and evaluated with the existing technology purely on the basis of cost, net present value or return on investment. Only within an unchanging and relatively stable TSN would such direct financial comparability be meaningful. For example, you can directly compare a manual typewriter with an electric typewriter, but not a typewriter with a word processor. Therein lies the management challenge of high technology. 

However, not all modern technologies are high technologies. They have to be used as such, function as such, and be embedded in their requisite TSNs. They have to empower the individual because only through the individual can they empower knowledge. Not all information technologies have integrative effects. Some information systems are still designed to improve the traditional hierarchy of command and thus preserve and entrench the existing TSN. The administrative model of management, for instance, further aggravates the division of task and labor, further specializes knowledge, separates management from workers, and concentrates information and knowledge in centers.

As knowledge surpasses capital, labor, and raw materials as the dominant economic resource, technologies are also starting to reflect this shift. Technologies are rapidly shifting from centralized hierarchies to distributed networks. Nowadays knowledge does not reside in a super-mind, super-book, or super-database, but in a complex relational pattern of networks brought forth to coordinate human action. 
 

2.6.4 Examples of Disruption

In the practical world, the popularization of personal computers illustrates how knowledge contributes to the ongoing technology innovation. The original centralized concept (one computer, many persons) is a knowledge-defying idea of the prehistory of computing, and its inadequacies and failures have become clearly apparent. The era of personal computing brought powerful computers "on every desk" (one person, one computer). This short transitional period was necessary for getting used to the new computing environment, but was inadequate from the vantage point of producing knowledge. Adequate knowledge creation and management come mainly from networking and distributed computing (one person, many computers). Each person's computer must form an access point to the entire computing landscape or ecology through the Internet of other computers, databases, and mainframes, as well as production, distribution, and retailing facilities, and the like. For the first time, technology empowers individuals rather than external hierarchies. It transfers influence and power where it optimally belongs: at the loci of the useful knowledge. Even though hierarchies and bureaucracies do not innovate, free and empowered individuals do; knowledge, innovation, spontaneity, and self-reliance are becoming increasingly valued and promoted.

Amazon Alexa, Uber, and Airbnb are other examples of disruption. 
 

2.7 Innovation

Innovation in its modern meaning is "a new idea, creative thoughts, new imaginations in form of device or method". Innovation is often also viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs. Such innovation takes place through the provision of more-effective products, processes, services, technologies, or business models that are made available to markets, governments, and society. An innovation is something original and more effective and, as a consequence, new, that "breaks into" the market or society. Innovation is related to, but not the same as, invention: innovation is more apt to involve the practical implementation of an invention (i.e., a new or improved ability) to make a meaningful impact in the market or society, and not all innovations require an invention. Innovation often manifests itself via the engineering process, when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation.

While a novel device is often described as an innovation, in economics, management science, and other fields of practice and analysis, innovation is generally considered to be the result of a process that brings together various novel ideas in such a way that they affect society. In industrial economics, innovations are created and found empirically from services to meet growing consumer demand.

Innovation also has an older, quite different historical meaning. From the 1400s through the 1600s, prior to early American settlement, the concept of "innovation" was pejorative: an early modern synonym for rebellion, revolt, and heresy.

 

2.7.1 Definition of Innovation

A 2014 survey of literature on innovation found over 40 definitions. In an industrial survey of how the software industry defined innovation, the following definition given by Crossan and Apaydin was considered to be the most complete, which builds on the Organisation for Economic Co-operation and Development (OECD) manual's definition:

Innovation is production or adoption, assimilation, and exploitation of a value-added novelty in economic and social spheres; renewal and enlargement of products, services, and markets; development of new methods of production; and the establishment of new management systems. It is both a process and an outcome. 

According to Kanter, innovation includes original invention and creative use, and she defines innovation as the generation, admission, and realization of new ideas, products, services, and processes.

The two main dimensions of innovation are degree of novelty (i.e., whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and kind of innovation (i.e., whether it is process innovation or product-service system innovation). In recent organizational scholarship, researchers of workplaces have also distinguished innovation from creativity by providing an updated definition of these two related but distinct constructs:

Workplace creativity concerns the cognitive and behavioral processes applied when attempting to generate novel ideas. Workplace innovation concerns the processes applied when attempting to implement new ideas. Specifically, innovation involves some combination of problem/opportunity identification, the introduction, adoption or modification of new ideas germane to organizational needs, the promotion of these ideas, and the practical implementation of these ideas.
 

2.7.2 Inter-disciplinary Views of Innovation

2.7.2.1 Business and economics

In business and in economics, innovation can become a catalyst for growth. With rapid advancements in transportation and communications over the past few decades, the old-world concepts of factor endowments and comparative advantage, which focused on an area's unique inputs, are outmoded for today's global economy. Economist Joseph Schumpeter (1883–1950), who contributed greatly to the study of innovation economics, argued that industries must incessantly revolutionize the economic structure from within, that is, innovate with better or more effective processes and products, as well as market distribution, such as the shift from the craft shop to the factory. He famously asserted that "creative destruction is the essential fact about capitalism".[16] Entrepreneurs continuously look for better ways to satisfy their consumer base with improved quality, durability, service, and price, which come to fruition in innovation with advanced technologies and organizational strategies.

A prime example of innovation involved the explosive boom of Silicon Valley startups out of the Stanford Industrial Park. In 1957, dissatisfied employees of Shockley Semiconductor, the company of Nobel laureate and co-inventor of the transistor William Shockley, left to form an independent firm, Fairchild Semiconductor. After several years, Fairchild developed into a formidable presence in the sector. Eventually, these founders left to start their own companies based on their own latest ideas, and then leading employees started their own firms. Over the next 20 years, this snowball process launched the momentous startup-company explosion of information-technology firms. Essentially, Silicon Valley began as 65 new enterprises born out of Shockley's eight former employees. Since then, hubs of innovation have sprung up globally with similar metonyms, including Silicon Alley encompassing New York City.

Another example involves business incubators – a phenomenon nurtured by governments around the world, close to knowledge clusters (mostly research-based) like universities or other Government Excellence Centres – which aim primarily to channel generated knowledge to applied innovation outcomes in order to stimulate regional or national economic growth.

2.7.2.2 Organizations

In the organizational context, innovation may be linked to positive changes in efficiency, productivity, quality, competitiveness, and market share. However, recent research findings highlight the complementary role of organizational culture in enabling organizations to translate innovative activity into tangible performance improvements. Organizations can also improve profits and performance by providing work groups opportunities and resources to innovate, in addition to employee's core job tasks. Peter Drucker wrote:

Innovation is the specific function of entrepreneurship, whether in an existing business, a public service institution, or a new venture started by a lone individual in the family kitchen. It is the means by which the entrepreneur either creates new wealth-producing resources or endows existing resources with enhanced potential for creating wealth. –Drucker 

According to Clayton Christensen, disruptive innovation is the key to future success in business. The organization requires a proper structure in order to retain competitive advantage. It is necessary to create and nurture an environment of innovation. Executives and managers need to break away from traditional ways of thinking and use change to their advantage. It is a time of risk but even greater opportunity. The world of work is changing with the increase in the use of technology and both companies and businesses are becoming increasingly competitive. Companies will have to downsize or reengineer their operations to remain competitive. This will affect employment as businesses will be forced to reduce the number of people employed while accomplishing the same amount of work if not more.

While disruptive innovation will typically "attack a traditional business model with a lower-cost solution and overtake incumbent firms quickly,"[26] foundational innovation is slower, and typically has the potential to create new foundations for global technology systems over the longer term. Foundational innovation tends to transform business operating models as entirely new business models emerge over many years, with gradual and steady adoption of the innovation leading to waves of technological and institutional change that gain momentum more slowly. The advent of the packet-switched communication protocol TCP/IP (originally introduced in 1972 to support a single use case for United States Department of Defense electronic communication, email, and gaining widespread adoption only in the mid-1990s with the advent of the World Wide Web) is a foundational technology.

All organizations can innovate, including for example hospitals, universities, and local governments. For instance, former Mayor Martin O'Malley pushed the City of Baltimore to use CitiStat, a performance-measurement data and management system that allows city officials to maintain statistics on areas ranging from crime trends to the condition of potholes. This system aids in better evaluation of policies and procedures, with accountability and efficiency in terms of time and money. In its first year, CitiStat saved the city $13.2 million. Even mass transit systems have innovated, from hybrid bus fleets to real-time tracking at bus stands. In addition, vehicles increasingly carry mobile data terminals that serve as communication hubs between vehicles and a control center, automatically sending data on location, passenger counts, engine performance, mileage, and other information. This tool helps to deliver and manage transportation systems.

Still other innovative strategies include hospitals digitizing medical information in electronic medical records. Further examples: the U.S. Department of Housing and Urban Development's HOPE VI initiatives turned severely distressed public housing in urban areas into revitalized, mixed-income environments; the Harlem Children's Zone used a community-based approach to educate local-area children; and the Environmental Protection Agency's brownfield grants facilitate turning brownfields over to environmental protection, green spaces, and community and commercial development.

Hasmath et al. have found that within local government organizations in China, the appetite to innovate may be linked to specific character types. They identify three distinct character types within Chinese local government: an authoritarian-bureaucratic type, primarily older male cadres who are most likely to follow central government commands; a consultative-governance type that is most open to collaborating with NGOs and actors outside government; and an entrepreneurial type that is less risk-averse and demonstrates high personal efficacy.

2.7.2.3 Sources of Innovation

There are several sources of innovation. Innovation can occur as a result of a focused effort by a range of different agents, by chance, or as a result of a major system failure.

According to Peter F. Drucker, the general sources of innovations are different changes in industry structure, in market structure, in local and global demographics, in human perception, mood and meaning, in the amount of already available scientific knowledge, etc.

In the simplest linear model of innovation, the traditionally recognized source is manufacturer innovation, where an agent (a person or business) innovates in order to sell the innovation. R&D expenditure is the commonly used input measure for innovation, particularly in the business sector, where it is known as Business Expenditure on R&D (BERD); BERD has grown over the years as public-sector R&D investment has declined.

Another source of innovation, only now becoming widely recognized, is end-user innovation. This is where an agent (person or company) develops an innovation for their own (personal or in-house) use because existing products do not meet their needs. MIT economist Eric von Hippel has identified end-user innovation as, by far, the most important and critical in his classic book on the subject, "The Sources of Innovation".

The robotics engineer Joseph F. Engelberger asserts that innovations require only three things:

  1. a recognized need
  2. competent people with relevant technology
  3. financial support

However, innovation processes usually involve: identifying customer needs, macro and meso trends, developing competences, and finding financial support.

The Kline chain-linked model of innovation places emphasis on potential market needs as drivers of the innovation process, and describes the complex and often iterative feedback loops between marketing, design, manufacturing, and R&D.

Innovation by businesses is achieved in many ways, with much attention now given to formal research and development (R&D) for "breakthrough innovations". R&D helps spur patents and other scientific innovations that lead to productivity growth in such areas as industry, medicine, engineering, and government. Yet innovations can also be developed by less formal on-the-job modifications of practice, through the exchange and combination of professional experience, and by many other routes. Investigation of the relationship between the concepts of innovation and technology transfer has revealed overlap. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice, but there are many exceptions to each of these trends.

Information technology and changing business processes and management style can produce a work climate favorable to innovation. For example, the software tool company Atlassian conducts quarterly "ShipIt Days" in which employees may work on anything related to the company's products. Google employees work on self-directed projects for 20% of their time (known as Innovation Time Off). Both companies cite these bottom-up processes as major sources for new products and features.

An important innovation factor is customers buying products or using services. As a result, organizations may include users in focus groups (the user-centred approach), work closely with so-called lead users (the lead-user approach), or let users adapt products themselves. The lead-user method focuses on idea generation by leading users in developing breakthrough innovations. U-STIR, a project to innovate Europe's surface transportation system, employs such workshops. Regarding this user innovation, a great deal of innovation is done by those actually implementing and using technologies and products as part of their normal activities. User-innovators may sometimes become entrepreneurs selling their product, may trade their innovation in exchange for other innovations, or may be adopted by their suppliers. Nowadays, they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation, users or communities of users can further develop technologies and reinvent their social meaning.

One technique for innovating a solution to an identified problem is to experiment with many possible solutions. This technique was famously used by Thomas Edison's laboratory to find a version of the incandescent light bulb economically viable for home use, a search through thousands of possible filament designs that eventually settled on carbonized bamboo.
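The strategy of trying many candidates and keeping the best can be sketched generically. The candidate materials and the burn-time figures below are invented placeholders for illustration, not historical data:

```python
# Generic "try many candidates, keep the best" search, in the
# spirit of Edison's filament experiments. Candidates and the
# scoring function are hypothetical placeholders.

def best_candidate(candidates, score):
    """Evaluate every candidate and return the highest-scoring one."""
    best, best_score = None, float("-inf")
    for c in candidates:
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# Hypothetical example: pick the filament material with the
# longest (made-up) burn time in hours.
burn_time = {"platinum": 10, "carbonized thread": 40, "carbonized bamboo": 1200}
print(best_candidate(burn_time, burn_time.get))  # → carbonized bamboo
```

The same exhaustive-evaluation pattern underlies the screening and testing techniques discussed below; what changes is only the cost of evaluating each candidate.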

This technique is sometimes used in pharmaceutical drug discovery. Thousands of chemical compounds are subjected to high-throughput screening to see if they have any activity against a target molecule which has been identified as biologically significant to a disease. Promising compounds can then be studied; modified to improve efficacy, reduce side effects, and reduce cost of manufacture; and if successful turned into treatments.

The related technique of A/B testing is often used to help optimize the design of web sites and mobile apps. This is used by major sites such as amazon.com, Facebook, Google, and Netflix. Procter & Gamble uses computer-simulated products and online user panels to conduct larger numbers of experiments to guide the design, packaging, and shelf placement of consumer products. Capital One uses this technique to drive credit card marketing offers.
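The statistics behind such an A/B test can be sketched with a standard two-proportion z-test. The visitor and conversion counts below are invented for illustration; real sites use far larger samples and dedicated experimentation tooling:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic split: variant B converts 120/1000 vs A's 100/1000.
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # |z| > 1.96 would indicate significance at the 5% level
```

Here the z-statistic comes out at about 1.43, below the conventional 1.96 cutoff, so this hypothetical experiment would not yet justify shipping variant B.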
 

2.7.2.4 Goals and Failures of Innovation

Programs of organizational innovation are typically tightly linked to organizational goals and objectives, to the business plan, and to market competitive positioning. One driver for innovation programs in corporations is to achieve growth objectives. As Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone... Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results".

One survey across a large number of manufacturing and services organizations found, ranked in decreasing order of popularity, that systematic programs of organizational innovation are most frequently driven by: improved quality, creation of new markets, extension of the product range, reduced labor costs, improved production processes, reduced materials, reduced environmental damage, replacement of products/services, reduced energy consumption, conformance to regulations.

These goals vary between improvements to products, processes and services, and dispel a popular myth that innovation deals mainly with new product development. Most of the goals could apply to any organization, be it a manufacturing facility, marketing company, hospital or government. Whether innovation goals are successfully achieved depends greatly on the environment prevailing in the organization.

Conversely, programs of innovation can fail. The causes of failure have been widely researched and can vary considerably. Some causes are external to the organization and outside its influence or control; others are internal and ultimately within the organization's control. Internal causes of failure can be divided into causes associated with the cultural infrastructure and causes associated with the innovation process itself. Common causes of failure within the innovation process in most organizations can be distilled into five types: poor goal definition, poor alignment of actions to goals, poor participation in teams, poor monitoring of results, and poor communication and access to information.
 

2.7.3 Diffusion of Innovation

Diffusion of innovation research was begun in 1903 by the seminal researcher Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde defined the innovation-decision process as a series of steps that include:

  1. knowledge
  2. forming an attitude
  3. a decision to adopt or reject
  4. implementation and use
  5. confirmation of the decision

Once innovation occurs, innovations may spread from the innovator to other individuals and groups. It has been proposed that the lifecycle of innovations can be described using the 's-curve' or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point, customer demand takes off and product growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its lifecycle, growth slows and may even begin to decline. In the later stages, no amount of new investment in that product will yield a normal rate of return.

The s-curve derives from an assumption that new products are likely to have a "product life", i.e. a start-up phase, a rapid increase in revenue and eventual decline. In fact, the great majority of innovations never get off the bottom of the curve and never produce normal returns.

Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves will come along to replace older ones and continue to drive growth upwards. In the figure above, the first curve shows a current technology; the second shows an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors.
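The s-curve described above is commonly modeled with a logistic function. The sketch below (illustrative parameters only, not taken from the text) shows the three phases: slow early growth, rapid mid-life growth, and saturation toward a ceiling:

```python
from math import exp

def s_curve(t, ceiling=100.0, midpoint=5.0, rate=1.0):
    """Logistic s-curve: revenue or productivity as a function of time.
    ceiling  - the mature level the innovation saturates toward
    midpoint - the time of fastest growth (inflection point)
    rate     - how steep the rapid-growth phase is"""
    return ceiling / (1 + exp(-rate * (t - midpoint)))

# Early stage: slow growth; mid-life: rapid; late: flattening out
for t in range(0, 11, 2):
    print(f"t={t:2d}  value={s_curve(t):6.1f}")
```

A successor technology would simply be a second such curve with a later midpoint and a higher ceiling, which is the "successive s-curves" picture the text describes.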

2.7.4 Measures of Innovation

Measuring innovation is inherently difficult, as it implies commensurability so that comparisons can be made in quantitative terms. Innovation, however, is by definition novelty, so comparisons are often meaningless across products or services. Nevertheless, Edison et al., in their review of the literature on innovation management, found 232 innovation metrics. They categorized these measures along five dimensions: inputs to the innovation process, outputs from the innovation process, effects of the innovation output, measures to assess the activities in an innovation process, and the availability of factors that facilitate such a process.

Innovation is measured at two different levels: the organizational level and the political level.

2.7.4.1 Organizational level

The measurement of innovation at the organizational level relates to individuals, team-level assessments, and private companies from the smallest to the largest. Measurement of innovation in organizations can be conducted through surveys, workshops, consultants, or internal benchmarking. There is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation, such as business measures related to finances, innovation-process efficiency, employees' contribution and motivation, as well as benefits for customers. Measured values vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations.

2.7.4.2 Political Level

At the political level, measures of innovation focus more on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various evaluation frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1992) suggests standard guidelines for measuring technological product and process innovation. Some consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2018 takes a wider perspective on innovation and includes marketing and organizational innovation. These standards are used, for example, in the European Community Innovation Surveys.

Innovation has also traditionally been measured through expenditure, for example investment in R&D (research and development) as a percentage of GNP (gross national product). Whether this is a good measure of innovation has been widely discussed, and the Oslo Manual has incorporated some of the critique of earlier methods of measuring. The traditional methods nevertheless still inform many policy decisions. The EU Lisbon Strategy set a goal that average expenditure on R&D should be 3% of GDP.

2.7.4.3 Indicators of Innovations

Many scholars claim that there is a strong bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is ignored, with measurement of and research on it rarely done. For example, an institution may be high-tech with the latest equipment but lack the crucial doing, using and interacting tasks important for innovation.

A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research is a form of price control which reduces returns to industry, and thus limits R&D expenditure, stifles future innovation and compromises new products' access to markets.[55] Some academics claim cost-effectiveness research is a valuable value-based measure of innovation which accords "truly significant" therapeutic advances (i.e. those providing "health gain") higher prices than free market mechanisms. Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse.

An Australian academic argued that national comparative cost-effectiveness analysis systems should be viewed as measuring "health innovation", an evidence-based policy concept for valuing innovation distinct from valuing through competitive markets (a method which requires strong anti-trust laws to be effective), on the basis that both methods of assessing pharmaceutical innovations are mentioned in annex 2C.1 of the Australia-United States Free Trade Agreement.

2.7.4.4 Indices of Innovation

Several indices attempt to measure innovation and rank entities based on these measures, such as:

  • Bloomberg Innovation Index
  • "Bogota Manual" similar to the Oslo Manual, is focused on Latin America and the Caribbean countries.
  • "Creative Class" developed by Richard Florida
  • EIU Innovation Ranking
  • Global Competitiveness Report
  • Global Innovation Index (GII), by INSEAD
  • Information Technology and Innovation Foundation (ITIF) Index
  • Innovation 360 – From the World Bank. Aggregates innovation indicators (and more) from a number of different public sources
  • Innovation Capacity Index (ICI) published by a large number of international professors working in a collaborative fashion. The top scorers of ICI 2009–2010 were: 1. Sweden 82.2; 2. Finland 77.8; and 3. United States 77.5
  • Innovation Index, developed by the Indiana Business Research Center, to measure innovation capacity at the county or regional level in the United States
  • Innovation Union Scoreboard
  • innovationsindikator for Germany, developed by the Federation of German Industries (Bundesverband der Deutschen Industrie) in 2005
  • INSEAD Innovation Efficacy Index
  • International Innovation Index, produced jointly by The Boston Consulting Group, the National Association of Manufacturers (NAM) and its nonpartisan research affiliate The Manufacturing Institute, is a worldwide index measuring the level of innovation in a country; NAM describes it as the "largest and most comprehensive global index of its kind"
  • Management Innovation Index – Model for Managing Intangibility of Organizational Creativity: Management Innovation Index
  • NYCEDC Innovation Index, by the New York City Economic Development Corporation, tracks New York City's "transformation into a center for high-tech innovation. It measures innovation in the City's growing science and technology industries and is designed to capture the effect of innovation on the City's economy"
  • OECD Oslo Manual is focused on North America, Europe, and other rich economies
  • State Technology and Science Index, developed by the Milken Institute, is a U.S.-wide benchmark to measure the science and technology capabilities that furnish high paying jobs based around key components
  • World Competitiveness Scoreboard

2.7.4.5 Rankings of Innovation

Many research studies try to rank countries on measures of innovation. Common areas of focus include high-tech companies, manufacturing, patents, post-secondary education, research and development, and research personnel. The ranking of the top 10 countries below is based on the 2016 Bloomberg Innovation Index. However, studies vary widely; for example, the Global Innovation Index 2016 ranks Switzerland as number one, while countries like South Korea and Japan do not even make the top ten.
 

2.7.4.6 Future of Innovation

In 2005 Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center, argued on the basis of both U.S. patents and world technological breakthroughs, per capita, that the rate of human technological innovation peaked in 1873 and has been slowing ever since. In his article, he asked "Will the level of technology reach a maximum and then decline as in the Dark Ages?" In later comments to New Scientist magazine, Huebner clarified that while he believed that we will reach a rate of innovation in 2024 equivalent to that of the Dark Ages, he was not predicting the reoccurrence of the Dark Ages themselves.

John Smart criticized the claim, asserting that technological singularity researcher Ray Kurzweil and others had shown a "clear trend of acceleration, not deceleration" when it came to innovations. The foundation replied to Huebner in the journal his article was published in, citing Second Life and eHarmony as proof of accelerating innovation, to which Huebner replied. However, Huebner's findings were corroborated in 2010 with U.S. Patent Office data and in a 2012 paper.

2.7.4.7 Innovation and Development

The theme of innovation as a tool for disrupting patterns of poverty has gained momentum since the mid-2000s among major international development actors such as DFID, the Gates Foundation with its Grand Challenge funding model, and USAID's Global Development Lab. Networks have been established to support innovation in development, such as D-Lab at MIT. Investment funds have been established to identify and catalyze innovations in developing countries, such as DFID's Global Innovation Fund, the Human Development Innovation Fund, and (in partnership with USAID) the Global Development Innovation Ventures.
 

2.7.5 Government Policies

Given the noticeable effects on efficiency, quality of life, and productive growth, innovation is a key factor in society and economy. Consequently, policymakers have long worked to develop environments that will foster innovation and its resulting positive benefits, from funding Research and Development to supporting regulatory change, funding the development of innovation clusters, and using public purchasing and standardisation to 'pull' innovation through.

For instance, experts advocate that the U.S. federal government launch a National Infrastructure Foundation, a nimble, collaborative strategic-intervention organization that would house innovation programs from fragmented silos under one entity, inform federal officials on innovation performance metrics, strengthen industry-university partnerships, and support innovation economic-development initiatives, especially to strengthen regional clusters. Because clusters are the geographic incubators of innovative products and processes, a cluster development grant program would also be targeted for implementation. By focusing on innovation in areas such as precision manufacturing, information technology, and clean energy, other areas of national concern would be tackled, including government debt, carbon footprint, and oil dependence. The U.S. Economic Development Administration recognizes this reality in its continued Regional Innovation Clusters initiative.[87] In addition, federal grants in R&D, a crucial driver of innovation and productive growth, should be expanded to levels similar to Japan, Finland, South Korea, and Switzerland in order to stay globally competitive. Such grants should also be better allocated to metropolitan areas, the essential engines of the American economy.

Many countries recognize the importance of research and development as well as innovation, including Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT); Germany's Federal Ministry of Education and Research; and the Ministry of Science and Technology in the People's Republic of China. Russia's innovation programme is the Medvedev modernisation programme, which aims at creating a diversified economy based on high technology and innovation. The Government of Western Australia has also established a number of innovation incentives for government departments; Landgate was the first Western Australian government agency to establish its Innovation Program.

Regions have taken a more proactive role in supporting innovation. Many regional governments are setting up regional innovation agencies to strengthen regional innovation capabilities. In Medellin, Colombia, the municipality created Ruta N in 2009 to transform the city into a knowledge city.
 

2.8 Data-driven Business

There's a lot of talk among both technology vendor communities and high-paid consultants about the need to make your business “data-driven.” While it's easy to talk about being data-driven, there are some key questions you might want to ask to get clarity from a practical perspective:

  • What does it really mean to be data-driven?
  • How does the concept translate into pragmatic guidance that businesses can apply?
  • Can a conventional organization transform into a data-driven business?

I believe that there are different ways organizations can be driven by data. Specifically, there are those that are completely data-driven, others that use data to drive a more conventional business, and still others that use data to enhance or optimize their business. In this post, I’ll examine these different classifications. In a later post, we'll consider what a conventional business can learn from these different classes of data-driven businesses to increase their competitiveness in our information-rich world.
 

2.8.1 Fully Data-driven Businesses

An organization that is completely data-driven competes solely on the basis of transforming information into a monetizable asset. These businesses create revenue streams by using their platforms to turn information sharing into a means of value exchange. Consequently, they benefit by extracting some kind of fee layered on top of the basic costs of a transaction.

A prime example is Airbnb, which is a two-sided marketplace for customers (individuals looking for temporary housing) and goods/services providers (people with space available for short-term rentals). The company basically partners with the providers, collects their information and makes that information available to the customer pool. A customer selects a provider, executes the transaction, and then a negotiated commission is transmitted back to the company. The company, however, does not own any of the assets; the assets are owned and managed by the providers. Uber and eBay are similar companies. Other information companies – like Craigslist and Angie's List – either charge providers to post a listing about their products/services, or charge a subscription fee to customers. And of course, companies like Facebook and LinkedIn use data to drive advertising revenue.

2.8.2 Data-infused Businesses

A different class of data-driven business, which I call a data-infused business, manages and sells from an inventory of products (and possibly services) and controls the end-to-end sales process, but uses information to drive marketing and increase sales. An obvious example is Amazon: its website is driven by recommendation engines, and its massive supply chains are driven by predictive analytics. It also relies on the sharing economy (like Uber) by outsourcing delivery to independent drivers who use a similar marketplace application. Another example is Netflix, which transacts with content providers (like movie studios) for material that can be streamed, and charges customers a monthly fee to generate revenue. While Netflix doesn't own the content, it uses its platform to broker content delivery.
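At their simplest, the recommendation engines mentioned above compute item-to-item similarity over purchase histories ("customers who bought X also bought..."). The sketch below uses made-up purchase data and cosine similarity over buyer sets; production systems are vastly larger and more sophisticated:

```python
from math import sqrt

# Hypothetical purchase histories: user -> set of items bought
purchases = {
    "u1": {"book", "lamp"},
    "u2": {"book", "lamp", "desk"},
    "u3": {"book", "desk"},
    "u4": {"lamp"},
}

def item_similarity(a, b):
    """Cosine similarity between two items' buyer sets."""
    buyers_a = {u for u, items in purchases.items() if a in items}
    buyers_b = {u for u, items in purchases.items() if b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(item, catalog=("book", "lamp", "desk")):
    """Return the catalog item most similar to `item` by co-purchase."""
    scores = {o: item_similarity(item, o) for o in catalog if o != item}
    return max(scores, key=scores.get)

print(recommend("book"))
```

The same co-occurrence idea, scaled up with predictive models, is what drives the product suggestions and supply-chain forecasts described in the paragraph above.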

2.8.3 Data-informed Businesses

The third class of data-driven business, which I call a “data-informed” business, includes more conventional companies that are adapting data technologies to fit their existing business models. One example is John Deere, an equipment manufacturer that's embracing Internet of Things (IoT) devices and embedding them into its newest models of equipment. We can presume that adding IoT devices to this equipment helps generate data that can be used to monitor equipment performance, help the purchaser maintain the equipment, identify opportunities for improving designs, and find the best ways to market products to customers.

There are clear differences between the ways businesses can be driven by data:

  • Completely data-driven businesses create revenue out of data.
  • Data-infused businesses disrupt their markets as they use data to significantly boost efficiency and gain advantages over competitors in the industry.
  • Data-informed businesses recognize that there's a need for improved data management so they can continue to be competitive.

 

2.9 Cyber Security

Computer security, cybersecurity or information technology security (IT security) is the protection of computer systems from theft of or damage to their hardware, software or electronic data, as well as from disruption or misdirection of the services they provide.

The field is growing in importance due to increasing reliance on computer systems, the Internet and wireless networks such as Bluetooth and Wi-Fi, and due to the growth of "smart" devices, including smartphones, televisions and the various tiny devices that constitute the Internet of things. Due to its complexity, both in terms of politics and technology, it is also one of the major challenges of the contemporary world.
 

2.9.1 Vulnerabilities and attacks

A vulnerability is a weakness in design, implementation, operation or internal control. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database.

An exploitable vulnerability is one for which at least one working attack or "exploit" exists. Vulnerabilities are often hunted or exploited with the aid of automated tools or manually using customized scripts.

To secure a computer system, it is important to understand the attacks that can be made against it. These threats can typically be classified into one of the categories below:

2.9.1.1 Backdoor

A backdoor in a computer system, cryptosystem or algorithm is any secret method of bypassing normal authentication or security controls. Backdoors may exist for a number of reasons, including original design or poor configuration. They may have been added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons; but regardless of the motives for their existence, they create a vulnerability.

2.9.1.2 Denial-of-service attacks

Denial-of-service (DoS) attacks are designed to make a machine or network resource unavailable to its intended users.[5] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points – and defending is much more difficult. Such attacks can originate from the zombie computers of a botnet, but a range of other techniques are possible, including reflection and amplification attacks, where innocent systems are fooled into sending traffic to the victim.
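The per-source throttling alluded to above (blocking or slowing a single abusive IP) is often implemented as a token bucket. The sketch below is a minimal in-memory illustration, not a production defense, and as the paragraph notes it does nothing against a distributed attack spread over many source addresses:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-IP token bucket: each source may send `rate` requests/second,
    with bursts up to `capacity`; requests beyond that are rejected."""
    def __init__(self, rate=5.0, capacity=10.0):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: capacity)   # start with a full bucket
        self.last = defaultdict(time.monotonic)       # last-seen timestamp per IP

    def allow(self, ip):
        now = time.monotonic()
        # refill tokens accrued since this IP's previous request
        self.tokens[ip] = min(self.capacity,
                              self.tokens[ip] + (now - self.last[ip]) * self.rate)
        self.last[ip] = now
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False   # over the limit: drop or challenge the request

limiter = TokenBucket()
# A burst of 15 rapid requests from one (documentation-range) IP address
results = [limiter.allow("203.0.113.7") for _ in range(15)]
print(results.count(True), "allowed,", results.count(False), "dropped")
```

A real deployment would do this in the firewall or load balancer rather than application code, but the accounting is the same.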

2.9.1.3 Direct-access attacks

An unauthorized user gaining physical access to a computer is most likely able to copy data directly from it. Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers or covert listening devices, or using wireless mice.[6] Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module are designed to prevent these attacks.

2.9.1.4 Eavesdropping

Eavesdropping is the act of surreptitiously listening to a private conversation, typically between hosts on a network. For instance, programs such as Carnivore and NarusInSight have been used by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon via monitoring the faint electromagnetic transmissions generated by the hardware; TEMPEST is a specification by the NSA referring to these attacks.

2.9.1.5 Multi-vector, polymorphic attacks

In 2017, a new class of multi-vector, polymorphic[8] cyber threats surfaced that combine several types of attacks and change form to avoid cyber security controls as they spread. These threats have been classified as fifth-generation cyber attacks.

2.9.1.6 Phishing

Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form of social engineering.
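Fake sites like those described above often sit on lookalike domains. One deliberately simplistic heuristic (real anti-phishing systems use many more signals) is to flag domains within a small edit distance of brands you care about; the brand list below is purely illustrative:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_phish(domain, brands=("paypal.com", "amazon.com")):
    """Flag domains that are near, but not identical to, a known brand."""
    return any(0 < edit_distance(domain, b) <= 2 for b in brands)

print(looks_like_phish("paypa1.com"))   # lookalike domain: flagged
print(looks_like_phish("paypal.com"))   # exact match: not flagged
```

Typosquatted names like "paypa1.com" (digit 1 for letter l) are exactly the near-miss spellings this kind of check catches.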

2.9.1.7 Privilege escalation

Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level. For example, a standard computer user may be able to fool the system into giving them access to restricted data; or even become "root" and have full unrestricted access to a system. 

2.9.1.8 Social engineering

Social engineering aims to convince a user to disclose secrets such as passwords, card numbers, etc. by, for example, impersonating a bank, a contractor, or a customer.

A common scam involves fake CEO emails sent to accounting and finance departments. In early 2016, the FBI reported that the scam has cost US businesses more than $2bn in about two years.

In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms.

2.9.1.9 Spoofing

Spoofing is the act of masquerading as a valid entity through falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. There are several types of spoofing, including:

  • Email spoofing, where an attacker forges the sending (From, or source) address of an email.
  • IP address spoofing, where an attacker alters the source IP address in a network packet to hide their identity or impersonate another computing system.
  • MAC spoofing, where an attacker modifies the Media Access Control (MAC) address of their network interface to pose as a valid user on a network.
  • Biometric spoofing, where an attacker produces a fake biometric sample to pose as another user.

2.9.1.10 Tampering

Tampering describes the malicious modification of products. So-called "Evil Maid" attacks and the planting of surveillance capability into routers by security services are examples.

 

2.9.2 Information security culture

Employee behavior can have a big impact on information security in organizations. Cultural concepts can help different segments of the organization work effectively, or work against effectiveness, towards information security. ″Exploring the Relationship between Organizational Culture and Information Security Culture″ provides the following definition of information security culture: ″ISC is the totality of patterns of behavior in an organization that contribute to the protection of information of all kinds.″

Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that ignore its best interests. Research shows that information security culture needs to be improved continuously. In ″Information Security Culture from Analysis to Change″, the authors comment, ″It's a never ending process, a cycle of evaluation and change or maintenance.″ To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.

  • Pre-Evaluation: to identify employees' awareness of information security and to analyze the current security policy.
  • Strategic Planning: to come up with a better awareness program, clear targets need to be set. Clustering people into target groups helps to achieve it.
  • Operative Planning: a good security culture can be established based on internal communication, management buy-in, security awareness, and a training program.
  • Implementation: four stages should be used to implement the information security culture. They are:
  1. Commitment of the management
  2. Communication with organizational members
  3. Courses for all organizational members
  4. Commitment of the employees
  • Post-Evaluation: to assess the success of the planning and implementation, and to identify unresolved areas of concern.

 

2.9.3 Systems at Risk

The growth in the number of computer systems, and the increasing reliance upon them of individuals, businesses, industries and governments means that there are an increasing number of systems at risk. 

2.9.3.1 Financial Systems

The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cyber criminals interested in manipulating markets and making illicit gains. Web sites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market. In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs.

2.9.3.2 Utilities and industrial equipment

Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies. Vulnerabilities in smart meters (many of which use local radio or cellular communications) can be exploited for billing fraud.

2.9.3.3 Aviation

The aviation industry is very reliant on a series of complex systems which could be attacked. A simple power outage at one airport can cause repercussions worldwide, much of the system relies on radio transmissions which could be disrupted, and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. There is also potential for attack from within an aircraft.

In Europe, with the Pan-European Network Service (PENS) and NewPENS, and in the US with the NextGen program, air navigation service providers are moving to create their own dedicated networks.

The consequences of a successful attack range from loss of confidentiality to loss of system integrity, air traffic control outages, loss of aircraft, and even loss of life. 

2.9.3.4 Consumer Devices

Desktop computers and laptops are commonly targeted to gather passwords or financial account information, or to construct a botnet to attack another target. Smartphones, tablet computers, smart watches, and other mobile devices such as quantified self devices like activity trackers have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.

The increasing number of home automation devices such as the Nest thermostat are also potential targets.

2.9.3.5 Large Corporations

Large corporations are common targets. In many cases this is aimed at financial gain through identity theft and involves data breaches such as the loss of millions of clients' credit card details by Home Depot, Staples, Target Corporation, and the most recent breach of Equifax.

Some cyberattacks are ordered by foreign governments that engage in cyberwarfare with the intent to spread propaganda, sabotage their targets, or spy on them. Many analysts believe the Russian government played a major role in the US presidential election of 2016 by using Twitter and Facebook to affect the results.

Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale. Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.

Not all attacks are financially motivated, however. For example, security firm HBGary Federal suffered a serious series of attacks in 2011 from the hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated the group, and in the 2014 Sony Pictures attack the motive appears to have been to embarrass the company with data leaks and to cripple it by wiping workstations and servers.

2.9.3.6 Automobiles

Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network. Self-driving cars are expected to be even more complex.

All of these systems carry some security risk, and such issues have gained wide attention. Simple examples of risk include a malicious compact disc being used as an attack vector, and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater – and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.

Manufacturers are reacting in a number of ways, with Tesla in 2016 pushing out some security fixes "over the air" into its cars' computer systems.

In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards, and called for states to come up with uniform policies.

2.9.3.7 Government

Government and military computer systems are commonly attacked by activists and foreign powers. Local and regional government infrastructure such as traffic light controls, police and intelligence agency communications, personnel records, student records, and financial systems are also potential targets as they are now all largely computerized. Passports and government ID cards that control access to facilities which use RFID can be vulnerable to cloning. 

2.9.3.8 Internet of Things (IoT) and Physical Vulnerabilities

The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data – and concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.

While the IoT creates opportunities for more direct integration of the physical world into computer-based systems, it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat. If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.

2.9.3.9 Medical Systems

Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment and implanted devices such as pacemakers and insulin pumps. There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices – but proposed no structure for enforcement.

2.9.3.10 Energy Sector

In distributed generation systems, the risk of cyber attacks is real, according to Daily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility, Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyberattackers to threaten the electric grid."

 

2.9.4 Impact of security breaches

Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[90] Security breaches continue to cost businesses billions of dollars, but a survey revealed that 66% of security staff do not believe senior leadership treats cyber precautions as a strategic priority.[40]

However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).
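The Gordon-Loeb result can be made concrete with a short calculation. The sketch below uses hypothetical figures (the $10M potential loss and 15% breach probability are illustrative, not from the source) and applies the model's headline bound: optimal security spending never exceeds 1/e (about 36.8%) of the expected loss.

```python
# Illustrative Gordon-Loeb calculation (hypothetical numbers).
# Expected loss = vulnerability (breach probability) x potential loss.
from math import e

potential_loss = 10_000_000   # loss if a breach occurs, in dollars (assumed)
vulnerability = 0.15          # probability of a breach absent further investment (assumed)

expected_loss = vulnerability * potential_loss

# Gordon-Loeb bound: the optimal security investment never exceeds
# 1/e (~36.8%) of the expected loss.
max_rational_investment = expected_loss / e

print(f"Expected loss: ${expected_loss:,.0f}")
print(f"Upper bound on rational investment: ${max_rational_investment:,.0f}")
```

Under these assumed numbers, a firm facing a $1.5M expected loss would be irrational to spend more than roughly $550K protecting against it, which is the sense in which spending should be "only a small fraction of the expected loss."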

 

2.9.5 Attacker Motivation

As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but started with amateurs such as Markus Hess who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg.

Additionally, recent attacker motivations can be traced to extremist organizations seeking political advantage or the disruption of social agendas. The growth of the internet, mobile technologies, and inexpensive computing devices has increased capabilities but has also increased the risk to environments deemed vital to operations. All critical targeted environments are susceptible to compromise, which has led to a series of proactive studies on how to mitigate the risk by taking the motivations of these types of actors into consideration. Several stark differences exist between the hacker's motivation and that of nation-state actors seeking to attack based on an ideological preference.

A standard part of threat modelling for any particular system is to identify what might motivate an attack on that system, and who might be motivated to breach it. The level and detail of precautions will vary depending on the system to be secured. A home personal computer, bank, and classified military network face very different threats, even when the underlying technologies in use are similar.

2.9.6 Computer Protection (CounterMeasures)

In computer security a countermeasure is an action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.

Some common countermeasures are listed in the following sections: 

2.9.6.1 Security by Design

Security by design, or alternatively secure by design, means that the software has been designed from the ground up to be secure. In this case, security is treated as a main feature.

Some of the techniques in this approach include:

  • The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way even if an attacker gains access to that part, they have only limited access to the whole system.
  • Automated theorem proving to prove the correctness of crucial software subsystems.
  • Code reviews and unit testing, approaches to make modules more secure where formal correctness proofs are not possible.
  • Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
  • Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
  • Audit trails tracking system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks.
  • Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept as short as possible when bugs are discovered.
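The audit-trail principle above — store logs so they can only be appended to, never silently edited — can be illustrated with a hash chain, where each entry commits to the one before it. This is a minimal sketch, not any particular product's logging format; the function names and entry fields are invented for illustration.

```python
# Tamper-evident, append-only audit trail sketch: each entry stores the
# hash of the previous entry, so any later modification breaks the chain.
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})
    return log

def verify_chain(log):
    """Recompute every hash; any edited entry makes verification fail."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
for evt in ["login:alice", "read:payroll.db", "logout:alice"]:
    append_entry(log, evt)

assert verify_chain(log)
log[1]["event"] = "read:nothing"   # simulated tampering by an intruder
assert not verify_chain(log)       # the break is detected
```

Storing such a log on a remote, append-only host is what keeps an intruder who gains local control from covering their tracks.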
2.9.6.2 Security Architecture

The Open Security Architecture organization defines IT security architecture as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services".

Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:

  • the relationship of different components and how they depend on each other.
  • the determination of controls based on risk assessment, good practice, finances, and legal matters.
  • the standardization of controls.
2.9.6.3 Security Measures

A state of computer "security" is the conceptual ideal, attained by the use of the three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:

  • User account access controls and cryptography can protect systems files and data, respectively.
  • Firewalls are by far the most common prevention systems from a network security perspective as they can (if properly configured) shield access to internal network services, and block certain kinds of attacks through packet filtering. Firewalls can be both hardware- or software-based.
  • Intrusion Detection System (IDS) products are designed to detect network attacks in-progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
  • "Response" is necessarily defined by the assessed security requirements of an individual system and may cover the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, a complete destruction of the compromised system is favored, as it may happen that not all the compromised resources are detected.

Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called "physical firewall", which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
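The packet-filtering behavior described above can be sketched as a first-match rule engine. This toy model is illustrative only — the rule fields and the default-deny policy are common firewall conventions, not a real firewall's API.

```python
# Toy first-match packet filter, in the style a firewall applies rules:
# walk the rule list top-down and return the first matching rule's action.
def filter_packet(rules, packet):
    """Return the action of the first matching rule, else default-deny."""
    for rule in rules:
        # A field missing from the rule (None) acts as a wildcard.
        if all(rule.get(field) in (None, packet[field])
               for field in ("proto", "dst_port", "src_ip")):
            return rule["action"]
    return "deny"   # default-deny: anything unmatched is dropped

rules = [
    {"proto": "tcp", "dst_port": 443, "action": "allow"},               # HTTPS from anywhere
    {"proto": "tcp", "dst_port": 22, "src_ip": "10.0.0.5", "action": "allow"},  # SSH from admin host
    {"proto": "tcp", "dst_port": 22, "action": "deny"},                 # SSH from elsewhere
]

print(filter_packet(rules, {"proto": "tcp", "dst_port": 443, "src_ip": "8.8.8.8"}))  # allow
print(filter_packet(rules, {"proto": "tcp", "dst_port": 22, "src_ip": "1.2.3.4"}))   # deny
```

Rule order matters: putting the broad port-22 deny before the admin-host allow would lock out the administrator too, which is why real firewall configurations are evaluated top-down and audited carefully.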

Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats.

However, relatively few organisations maintain computer systems with effective detection systems, and fewer still have organized response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets". The primary obstacle to effective eradication of cyber crime could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic evidence gathering by using packet capture appliances that puts criminals behind bars.

2.9.6.4 Vulnerability Management

Vulnerability management is the cycle of identifying and then remediating or mitigating vulnerabilities, especially in software and firmware. Vulnerability management is integral to computer security and network security.

Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, such as open ports, insecure software configuration, and susceptibility to malware.
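The open-port check that scanners perform can be sketched in a few lines: attempt a TCP connection to each candidate port and record the ones that accept. This is a minimal sketch of one scanning step, not a full vulnerability scanner, and should only be run against hosts you are authorized to test.

```python
# Minimal TCP connect scan: a port that accepts a connection is "open".
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real scanner would go further, matching each open port against a database of known-vulnerable service versions; this sketch shows only the discovery step.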

Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities. In some sectors, this is a contractual requirement.

2.9.6.5 Reducing Vulnerabilities

While formal verification of the correctness of computer systems is possible, it is not yet common. Operating systems formally verified include seL4, and SYSGO's PikeOS – but these make up a very small percentage of the market.

Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information. It requires "something you know" (a password or PIN) and "something you have" (a card, dongle, cellphone, or other piece of hardware). This increases security because an unauthorized person needs both to gain access.
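The "something you have" factor is often a one-time-code generator. Many such tokens and authenticator apps are built on the HOTP algorithm of RFC 4226 (TOTP, RFC 6238, simply feeds the current 30-second time interval in as the counter); a sketch of it:

```python
# HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated to digits.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation (RFC 4226 §5.3)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))   # 755224
print(hotp(b"12345678901234567890", 1))   # 287082
```

Because the server computes the same code from the shared secret and counter, a stolen password alone is not enough: the attacker would also need the device holding the secret.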

Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce relative to the sensitivity of the information. Training is often used to help mitigate this risk, but even in highly disciplined environments (e.g., military organizations), social engineering attacks can still be difficult to foresee and prevent.

Inoculation, derived from inoculation theory, seeks to prevent social engineering and other fraudulent tricks or traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.

It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates, using a security scanner, and hiring competent people responsible for security — though even systems built by competent people get penetrated. The effects of data loss or damage can be reduced by careful backups and insurance. 

2.9.6.6 Hardware Protection Mechanisms

While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously introduced during the manufacturing process, hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.

  • USB dongles are typically used in software licensing schemes to unlock software capabilities, but they can also be seen as a way to prevent unauthorized access to a computer or other device's software. The dongle, or key, essentially creates a secure encrypted tunnel between the software application and the key. The principle is that an encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides a stronger measure of security, since it is harder to hack and replicate the dongle than to simply copy the native software to another machine and use it. Another security application for dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer.
  • Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access.
  • Computer case intrusion detection refers to a device, typically a push-button switch, which detects when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time.
  • Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well.
  • Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks.
  • Disconnecting or disabling peripheral devices (cameras, GPS, removable storage, etc.) that are not in use.
  • Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field communication (NFC) on non-iOS devices and biometric validation such as thumb print readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings.
2.9.6.7 Secure Operating Systems

One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s the United States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC 15408, "Common Criteria" defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is Integrity-178B, which is used in the Airbus A380 and several military jets.

2.9.6.8 Secure Coding

In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; important for cryptographic protocols for example. 

2.9.6.9 Capabilities and Access Control Lists

Within computer systems, two of many security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based security. Using ACLs to confine programs has been proven to be insecure in many situations, such as if the host computer can be tricked into indirectly allowing restricted file access, an issue known as the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.

Capabilities have been mostly restricted to research operating systems, while commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language. 
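The ACL-versus-capability contrast can be made concrete with a toy sketch: an ACL checks "who is asking?" against a central table, while a capability is an unforgeable reference whose mere possession grants access. The class names here are invented for illustration and do not model any real operating system.

```python
# ACL model: access is decided by looking up the requester's identity.
class AclFile:
    def __init__(self, contents, acl):
        self._contents = contents
        self._acl = acl                       # e.g. {"alice": {"read"}}
    def read(self, user):
        if "read" not in self._acl.get(user, set()):
            raise PermissionError(user)
        return self._contents

# Capability model: holding this object *is* the permission; no identity check.
class ReadCapability:
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents

f = AclFile("secret", {"alice": {"read"}})
assert f.read("alice") == "secret"

# The confused deputy arises in the ACL model when an authorized program
# reads on behalf of an unauthorized caller. In the capability model the
# caller must hand over a ReadCapability, which it can only hold if it
# was granted access in the first place.
cap = ReadCapability("secret")
assert cap.read() == "secret"
```

This is also why capabilities compose naturally with object-oriented design: granting access is just passing an object reference.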

2.9.6.10 End User Security Training

The end-user is widely recognized as the weakest link in the security chain and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments.

As the human component of cyber risk is particularly relevant in determining the global cyber risk an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.

The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers to develop a culture of cyber awareness within the organization, recognizing that a security aware user provides an important line of defense against cyber attacks. 

2.9.6.11 Response to Breaches

Responding forcefully to attempted security breaches (in the manner that one would for attempted physical security breaches) is often very difficult for a variety of reasons:

  • Identifying attackers is difficult, as they are often in a different jurisdiction to the systems they attempt to breach, and operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymizing procedures which make backtracing difficult and are often located in yet another jurisdiction. If they successfully breach security, they are often able to delete logs to cover their tracks.
  • The sheer number of attempted attacks is so large that organisations cannot spend time pursuing each attacker (a typical home user with a permanent (e.g., cable modem) connection will be attacked at least several times per day, so more attractive targets could be presumed to see many more). Note however, that most of the sheer bulk of these attacks are made by automated vulnerability scanners and computer worms.
  • Law enforcement officers are often unfamiliar with information technology, and so lack the skills and interest to pursue attackers. There are also budgetary constraints: it has been argued that the high cost of technology, such as DNA testing and improved forensics, means less money for other kinds of law enforcement, so the overall rate of criminals not being dealt with goes up as the cost of the technology increases. In addition, identifying attackers across a network may require logs from various points in the network, and in many countries the release of these records to law enforcement (unless voluntarily surrendered by a network administrator or a system administrator) requires a search warrant; depending on the circumstances, the legal proceedings required can be drawn out to the point where the records are either routinely destroyed or the information is no longer relevant.
  • The United States government spends the largest amount of money every year on cyber security. The United States has a yearly budget of 28 billion dollars. Canada has the 2nd highest annual budget at 1 billion dollars. Australia has the third highest budget with only 70 million dollars.
2.9.6.12 Types of Security and Privacy
  • Access control
  • Anti-keyloggers
  • Anti-malware
  • Anti-spyware
  • Anti-subversion software
  • Anti-tamper software
  • Anti-theft
  • Antivirus software
  • Cryptographic software
  • Computer-aided dispatch (CAD)
  • Firewall
  • Intrusion detection system (IDS)
  • Intrusion prevention system (IPS)
  • Log management software
  • Parental control
  • Records management
  • Sandbox
  • Security information management
  • SIEM
  • Software and operating system updating

 

2.9.7 Incident Response Planning

Incident response is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion, typically escalates to a more impactful event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to limit damage and reduce recovery time and costs. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize impact and losses.

Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes, and procedures during an escalation, slowing the organization's response and resolution.

There are four key components of a computer security incident response plan:

  1. Preparation: Preparing stakeholders on the procedures for handling computer security incidents or compromises
  2. Detection & Analysis: Identifying and investigating suspicious activity to confirm a security incident, prioritizing the response based on impact and coordinating notification of the incident
  3. Containment, Eradication & Recovery: Isolating affected systems to prevent escalation and limit impact, pinpointing the genesis of the incident, removing malware, affected systems and bad actors from the environment and restoring systems and data when a threat no longer remains
  4. Post Incident Activity: Post mortem analysis of the incident, its root cause and the organization's response with the intent of improving the incident response plan and future response efforts

 

2.9.8 Notable Attacks and Breaches

Some illustrative examples of different types of computer security breaches are given below. 

2.9.8.1 Robert Morris and the first computer worm

In 1988, only 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internet "computer worm". The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr. who said "he wanted to count how many machines were connected to the Internet".

2.9.8.2 Rome Laboratory

In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.

2.9.8.3 TJX customer credit card details

In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.

2.9.8.4 Stuxnet attack

In 2010 the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges. It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States – although neither has publicly admitted this.

2.9.8.5 Global surveillance disclosures

In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption. This standard was later withdrawn due to widespread criticism. The NSA additionally were revealed to have tapped the links between Google's data centres.

2.9.8.6 Target and Home Depot breaches

In 2013 and 2014, a Russian/Ukrainian hacking ring known as "Rescator" broke into Target Corporation computers, stealing roughly 40 million credit cards, and then into Home Depot computers, stealing between 53 and 56 million credit card numbers. Warnings were delivered to both corporations but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities and the investigation is ongoing.

2.9.8.7 Office of Personnel Management data breach

In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[149] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[150] Data targeted in the breach included personally identifiable information such as Social Security Numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check. It is believed the hack was perpetrated by Chinese hackers.

2.9.8.8 Ashley Madison breach

In July 2015, a hacker group known as "The Impact Team" successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently. When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functioning.

 

2.9.9 Legal issues and global regulation

International legal issues of cyber attacks are complicated in nature. There is no global base of common rules by which to judge, and eventually punish, cyber crimes and cyber criminals - and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to a lack of laws under which to prosecute. Proving attribution for cyber crimes and cyber attacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." The use of techniques such as dynamic DNS, fast flux and bulletproof servers adds to the difficulty of investigation and enforcement.
 

2.9.10 Role of government

The role of the government is to make regulations that force companies and organizations to protect their systems, infrastructure, and information from cyberattacks, and also to protect its own national infrastructure, such as the national power grid.

Government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.

Many government officials and experts think that the government should do more, and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through." On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.
 

2.9.11 International actions

Many different teams and organisations exist, including:

  • The Forum of Incident Response and Security Teams (FIRST) is the global association of CSIRTs.[161] The US-CERT, AT&T, Apple, Cisco, McAfee, Microsoft are all members of this international team.
  • The Council of Europe helps protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime.
  • The purpose of the Messaging Anti-Abuse Working Group (MAAWG) is to bring the messaging industry together to work collaboratively and to successfully address the various forms of messaging abuse, such as spam, viruses, denial-of-service attacks and other messaging exploitations.[164] France Telecom, Facebook, AT&T, Apple, Cisco, Sprint are some of the members of the MAAWG.
  • ENISA: The European Network and Information Security Agency (ENISA) is an agency of the European Union with the objective of improving network and information security in the European Union.

Europe

On 14 April 2016, the European Parliament and Council of the European Union adopted the General Data Protection Regulation (GDPR) (EU) 2016/679. GDPR, which became enforceable on 25 May 2018, provides data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). GDPR requires that business processes that handle personal data be built with data protection by design and by default. GDPR also requires that certain organizations appoint a Data Protection Officer (DPO).

 

2.9.12 National actions
 

2.9.12.1 Computer emergency response teams

Most countries have their own computer emergency response team to protect network security. 

2.9.12.2 Canada

Since 2010, Canada has had a Cyber Security Strategy. This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online. There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.

The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond and recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors. It posts regular cyber security bulletins and operates an online reporting tool where individuals and organizations can report a cyber incident.

To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations, and launched the Cyber Security Cooperation Program. They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.

Public Safety Canada aimed to begin an evaluation of Canada's Cyber Security Strategy in early 2015.

2.9.12.3 China

China's Central Leading Group for Internet Security and Informatization (Chinese: 中央网络安全和信息化领导小组) was established on 27 February 2014. This Leading Small Group (LSG) of the Communist Party of China is headed by General Secretary Xi Jinping himself and is staffed with relevant Party and state decision-makers. The LSG was created to overcome the incoherent policies and overlapping responsibilities that characterized China's former cyberspace decision-making mechanisms. The LSG oversees policy-making in the economic, political, cultural, social and military fields as they relate to network security and IT strategy. This LSG also coordinates major policy initiatives in the international arena that promote norms and standards favored by the Chinese government and that emphasize the principle of national sovereignty in cyberspace.

2.9.12.4 Germany

On 16 June 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense, Nationales Cyber-Abwehrzentrum), located in Bonn. The NCAZ cooperates closely with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), the BKA (Federal Police Organisation, Bundeskriminalamt), the BND (Federal Intelligence Service, Bundesnachrichtendienst), the MAD (Military Intelligence Service, Amt für den Militärischen Abschirmdienst) and other national organisations in Germany responsible for national security. According to the Minister, the primary task of the new organization, founded on 23 February 2011, is to detect and prevent attacks against the national infrastructure; he cited incidents such as Stuxnet.

2.9.12.5 India

Some provisions for cyber security have been incorporated into rules framed under the Information Technology Act 2000.

The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect public and private infrastructure from cyber attacks and to safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO).

The Indian Companies Act 2013 has also introduced cyber law and cyber security obligations on the part of Indian directors.

2.9.12.6 South Korea

Following cyber attacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to training 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as for incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations.

2.9.12.7 United States

Legislation

The Computer Fraud and Abuse Act of 1986 (18 U.S.C. § 1030) is the key legislation. It prohibits unauthorized access to, or damage of, "protected computers" as defined in the statute. Although various other measures have been proposed, none has succeeded.

In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework.

 

Standardized Government Testing Services

The General Services Administration (GSA) has standardized the "penetration test" service as a pre-vetted support service, to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state, and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS) and are listed on the US GSA Advantage website.

Agencies

The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States, called the National Cyber Security Division. The division is home to US-CERT operations and the National Cyber Alert System. The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.

The third priority of the Federal Bureau of Investigation (FBI) is to: "Protect the United States against cyber-based attacks and high-technology crimes", and they, along with the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA) are part of the multi-agency task force, The Internet Crime Complaint Center, also known as IC3.

In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard.

The criminal division of the United States Department of Justice operates a section called the Computer Crime and Intellectual Property Section (CCIPS). The CCIPS is in charge of investigating computer crime and intellectual property crime, and is specialized in the search and seizure of digital evidence in computers and networks. In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C.)."

The United States Cyber Command, also known as USCYBERCOM, is tasked with the defense of specified Department of Defense information networks and ensures "the security, integrity, and governance of government and military IT infrastructure and assets." It has no role in the protection of civilian networks.

The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.

The Food and Drug Administration has issued guidance for medical devices, and the National Highway Traffic Safety Administration is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office,[200] and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System.[201] Concerns have also been raised about the future Next Generation Air Transportation System.

Computer emergency readiness team

"Computer emergency response team" is a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they work closely together.

  • US-CERT: part of the National Cyber Security Division of the United States Department of Homeland Security. 
  • CERT/CC: created by the Defense Advanced Research Projects Agency (DARPA) and run by the Software Engineering Institute (SEI).

 

2.9.13 Modern warfare

There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton from the Christian Science Monitor described in an article titled "The New Cyber Arms Race":

In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.

This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009 and many other countries have similar forces.

 

2.9.14 Careers

Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches.[206] According to research from the Enterprise Strategy Group, 46% of organizations said that they had a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015. Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data, such as finance, health care, and retail. However, the use of the term "cybersecurity" is more prevalent in government job descriptions.

Typical cyber security job titles and descriptions include:

Security analyst

Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks), investigates using available tools and countermeasures to remedy the detected vulnerabilities, and recommends solutions and best practices. Analyzes and assesses damage to the data/infrastructure as a result of security incidents, examines available recovery tools and processes, and recommends solutions. Tests for compliance with security policies and procedures. May assist in the creation, implementation, or management of security solutions.

Security engineer

Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect security incidents, and mounts the incident response. Investigates and utilizes new technologies and processes to enhance security capabilities and implement improvements. May also review code or perform other security engineering methodologies.

Security architect

Designs a security system or major components of a security system, and may head a security design team building a new security system.

Security administrator

Installs and manages organization-wide security systems. This position may also include taking on some of the tasks of a security analyst in smaller organizations.

Chief Information Security Officer (CISO)

A high-level management position responsible for the entire information security division/staff. The position may include hands-on technical work.

Chief Security Officer (CSO)

A high-level management position responsible for the entire security division/staff. A newer position, increasingly deemed necessary as security risks grow.

Security Consultant/Specialist/Intelligence

Broad titles that encompass any one or all of the other roles or titles tasked with protecting computers, networks, software, data or information systems against viruses, worms, spyware, malware, unauthorized access, denial-of-service attacks, and an ever-increasing list of attacks by hackers acting as individuals or as part of organized crime or foreign governments.

Student programs are also available for people interested in beginning a career in cybersecurity. Meanwhile, online security training, including webcasts, offers a flexible and effective way for information security professionals of all experience levels to keep studying. A wide range of certified courses are also available.

In the United Kingdom, a nationwide set of cyber security forums, known as the UK Cyber Security Forum, was established with the support of the Government's cyber security strategy, in order to encourage start-ups and innovation and to address the skills gap identified by the UK Government.

 

2.9.15 Terminology

The following terms used with regard to computer security are explained below:

  • Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer, such as through an interactive login screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, smart cards, and biometric systems.
  • Anti-virus software consists of computer programs that attempt to identify, thwart, and eliminate computer viruses and other malicious software (malware).
  • Applications are executable code, so general practice is to disallow users the power to install them; to install only those which are known to be reputable – and to reduce the attack surface by installing as few as possible. They are typically run with least privilege, with a robust process in place to identify, test and install any released security patches or updates for them.
  • Authentication techniques can be used to ensure that communication end-points are who they say they are.
  • Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
  • Backups are one or more copies kept of important computer files. Typically, multiple copies will be kept at different locations so that if a copy is stolen or damaged, other copies will still exist.
  • Capability and access control list techniques can be used to ensure privilege separation and mandatory access control.
  • Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
  • Confidentiality is the nondisclosure of information except to another authorized person.[219]
  • Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
  • Cyberwarfare is an Internet-based conflict that involves politically motivated attacks on information and information systems. Such attacks can, for example, disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems.
  • Data integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record.
  • Encryption is used to protect the confidentiality of a message. Cryptographically secure ciphers are designed to make any practical attempt of breaking them infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance.
  • Endpoint security software aids networks in preventing malware infection and data theft at network entry points made vulnerable by the prevalence of potentially infected devices such as laptops, mobile devices, and USB drives.[221]
  • Firewalls serve as a gatekeeper system between networks, allowing only traffic that matches defined rules. They often include detailed logging, and may include intrusion detection and intrusion prevention features. They are near-universal between company local area networks and the Internet, but can also be used internally to impose traffic rules between networks if network segmentation is configured.
  • A hacker is someone who seeks to breach defenses and exploit weaknesses in a computer system or network.
  • Honey pots are computers that are intentionally left vulnerable to attack by crackers. They can be used to catch crackers and to identify their techniques.
  • Intrusion-detection systems are devices or software applications that monitor networks or systems for malicious activity or policy violations.
  • A microkernel is an approach to operating system design which has only the near-minimum amount of code running at the most privileged level – and runs other elements of the operating system such as device drivers, protocol stacks and file systems, in the safer, less privileged user space.
  • Pinging. The standard "ping" application can be used to test if an IP address is in use. If it is, attackers may then try a port scan to detect which services are exposed.
  • A port scan is used to probe an IP address for open ports to identify accessible network services and applications.
  • A keylogger is spyware that silently captures and stores each keystroke that a user types on the computer's keyboard.
  • Social engineering is the use of deception to manipulate individuals to breach security.
  • A logic bomb is a type of malware added to a legitimate program that lies dormant until it is triggered by a specific event.
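The ping and port-scan entries above describe a basic reconnaissance technique: probing an address for open TCP ports to discover which network services are exposed. A minimal sketch of that idea, using only Python's standard socket module, is shown below (the scan_ports function and the host/port choices are illustrative, not from the text; such scans should only ever be run against systems you are authorized to test):

```python
# Minimal TCP port-scan sketch. Hypothetical helper for illustration only --
# scan only hosts you are authorized to test.
import socket

def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 when the TCP handshake succeeds (port open),
        # and an error code when the connection is refused or times out.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # localhost is used here so the example stays self-contained
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A real attacker (or defender auditing their own network) would sweep a much larger port range; the principle is the same connect-and-check loop.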

 

2.9.16 Scholars

  • Ross J. Anderson
  • Annie Anton
  • Adam Back
  • Daniel J. Bernstein
  • Matt Blaze
  • Stefan Brands
  • L. Jean Camp
  • Lance Cottrell
  • Lorrie Cranor
  • Dorothy E. Denning
  • Peter J. Denning
  • Cynthia Dwork
  • Deborah Estrin
  • Joan Feigenbaum
  • Ian Goldberg
  • Shafi Goldwasser
  • Lawrence A. Gordon
  • Peter Gutmann
  • Paul Kocher
  • Monica S. Lam
  • Butler Lampson
  • Brian LaMacchia
  • Carl Landwehr
  • Kevin Mitnick
  • Peter G. Neumann
  • Susan Nycum
  • Roger R. Schell
  • Bruce Schneier
  • Dawn Song
  • Gene Spafford
  • Salvatore J. Stolfo
  • Willis Ware
  • Moti Yung

 

2.10 Digital Talent Management, Engagement, and Culture

Company success links directly to what talent management, employee engagement and organizational culture have in common. The causal link among the three elements is powerful. It’s much like a rowing crew’s connection among rowers, oars, and scull.

Previously defined, talent management is an organization’s commitment to recruit, retain, and develop the most talented and superior employees available.

That commitment is enhanced by effective employee engagement, a buzz-phrase for the past several years. Employee engagement is the individual’s investment of her/his time, energy, skills, knowledge, and creativity in the efforts and directions set by the organization.

Organizational culture contributes to a business’s employee engagement. We define organizational culture as the values and behaviors that contribute to the unique social and psychological environment of an organization.

Your company’s culture offers critical engagement factors. These factors impact the three talent management components: recruitment, retention and development.
 

2.10.1 Talent Management: Recruitment

Recruitment currently targets those in Generation Y, the Millennial generation. Recruitment is a talent candidate’s first contact with your company. Recruitment should positively engage that candidate from the get-go. Organizational culture has a say in how you recruit, and therefore in how (well) you engage. Consider this about Millennials:

  • They seek work that is social. They are technologically savvy. They want jobs that motivate by time off and job satisfaction, rather than just by compensation.
  • They appreciate recruitment via use of social media. They expect personalized attention. They anticipate internet-speed responsiveness.

How does your company’s recruitment process and procedure measure up?
 

2.10.2 Talent Management: Retention

Retention remains the money-saving component of talent management. It is costly to hire, onboard, and bring a new hire up to speed. Strong employee engagement delivers stronger employee retention.

The SilkRoad Talent Talk Report 2014 states: “…in an unpredictable financial climate, companies need loyal, productive, and engaged employees more than ever. Employee engagement emerged as the most pressing concern…” Indeed, 53% of the 3,700 survey respondents indicated their company lacked an attractive culture to engage employees.

A company culture that offers, encourages, and maintains engagement by employees impacts every individual. Baby Boomers savor a workplace in which they can engage their energies and values. Gen Y workers relish a company that recognizes their independent skills. Generations in between approve of the chance to engage for their own reasons.

What salient employee engagement factors does your business culture provide?
 

2.10.3 Talent Management: Development

Development is a significant component of talent management. Developing employees from Day 1 throughout their time of service demonstrates company commitment. That commitment, perhaps more than any other offering, stimulates employee engagement. The commitment to such development can be a cornerstone value of a company's culture.

Employees have always requested, accepted, and appreciated training, education, mentoring, and development. They have welcomed opportunities to engage in personal and professional improvement. Consider the variety of ways an organization may satisfy that engagement:

  • Training that is job-specific or professionally generic.
  • Coaching and/or mentoring.
  • Formal education through university partnerships, tuition reimbursement, and online credits.
  • Professional associations and conferences.

Does your company offer developmental opportunities in each of these categories?

The connection is clear. Organizational culture can generate employee engagement. Employee engagement can support the three legs of talent management. Together, they contribute to your business's ability to compete successfully.


 

 

 

2.11 Management and Leadership

The main difference between leaders and managers is that leaders have people follow them while managers have people who work for them. A successful business owner needs to be both a strong leader and manager to get their team on board to follow them towards their vision of success.

Is a good manager automatically a good leader? What is the difference between leadership and management?


Leadership is about getting people to understand and believe in your vision and to work with you to achieve your goals, while managing is more about administering and making sure the day-to-day things happen as they should.

WHILE THERE ARE MANY TRAITS THAT MAKE UP A STRONG LEADER, SOME OF THE KEY CHARACTERISTICS ARE:

  • Honesty & Integrity: are crucial to get your people to believe you and buy in to the journey you are taking them on
  • Vision: know where you are, where you want to go and enroll your team in charting a path for the future
  • Inspiration: inspire your team to be all they can by making sure they understand their role in the bigger picture
  • Ability to Challenge: do not be afraid to challenge the status quo, do things differently and have the courage to think outside the box
  • Communication Skills: keep your team informed of the journey, where you are, where you are heading and share any roadblocks you may encounter along the way

 

SOME OF THE COMMON TRAITS SHARED BY STRONG MANAGERS ARE:

  • Being Able to Execute a Vision: take a strategic vision and break it down into a roadmap to be followed by the team
  • Ability to Direct: day-to-day work efforts, review resources needed and anticipate needs along the way
  • Process Management: establish work rules, processes, standards and operating procedures
  • People Focused: look after your people, their needs, listen to them and involve them

In order to engage your staff in providing the best service to your guests, clients or partners, you must enroll them in your vision and align their perceptions and behaviours. You need to get them excited about where you are taking them while making sure they know what's in it for them. With smaller organizations, the challenge lies in making sure you are both leading your team and managing your day-to-day operation. Those who are able to do both will create a competitive advantage. Are you both a leader and a manager? What would your staff say if you were to ask them?
 

 

2.12 Change Management

Change management is a collective term for all approaches to prepare, support and help individuals, teams, and organizations in making organizational change.

Change management (sometimes abbreviated as CM) is a collective term for all approaches to prepare, support, and help individuals, teams, and organizations in making organizational change. The most common change drivers include technological evolution, process reviews, crises, consumer habit changes, pressure from new business entrants, acquisitions, mergers, and organizational restructuring. Change management includes methods that redirect or redefine the use of resources, business processes, budget allocations, or other modes of operation that significantly change a company or organization. Organizational change management (OCM) considers the full organization and what needs to change, while change management may be used solely to refer to how people and teams are affected by such organizational transition. It draws on many different disciplines, from behavioral and social sciences to information technology and business solutions.

In a project-management context, the term "change management" may be used as an alternative to change control processes, wherein changes to the scope of a project are formally introduced and approved.
 

2.12.1 Approach

Organizational change management employs a structured approach to ensure that changes are implemented smoothly and successfully to achieve lasting benefits.

2.12.1.1 Reasons for change

Globalization and constant innovation of technology result in a constantly evolving business environment. Phenomena such as social media and mobile adaptability have revolutionized business and the effect of this is an ever-increasing need for change, and therefore change management. The growth in technology also has a secondary effect of increasing the availability and therefore accountability of knowledge. Easily accessible information has resulted in unprecedented scrutiny from stockholders and the media and pressure on management. With the business environment experiencing so much change, organizations must then learn to become comfortable with change as well. Therefore, the ability to manage and adapt to organizational change is an essential ability required in the workplace today. Yet, major and rapid organizational change is profoundly difficult because the structure, culture, and routines of organizations often reflect a persistent and difficult-to-remove "imprint" of past periods, which are resistant to radical change even as the current environment of the organization changes rapidly.

Due to the growth of technology, modern organizational change is largely motivated by exterior innovations rather than internal factors. When these developments occur, the organizations that adapt quickest create a competitive advantage for themselves, while the companies that refuse to change get left behind. This can result in drastic profit and/or market share losses. Organizational change directly affects all departments and employees. The entire company must learn how to handle changes to the organization. The effectiveness of change management can have a strong positive or negative impact on employee morale. 

2.12.1.2 Change models

There are several models of change management:

John Kotter's 8-Step Process for Leading Change

Dr. John P. Kotter, the Konosuke Matsushita Professor of Leadership, Emeritus, at the Harvard Business School, invented the 8-Step Process for Leading Change. It consists of eight stages:

  • Create a Sense of Urgency
  • Build a Guiding Coalition
  • Form a Strategic Vision and Initiatives
  • Enlist a Volunteer Army
  • Enable Action by Removing Barriers
  • Generate Short-Term Wins
  • Sustain Acceleration
  • Institute Change

Change Management Foundation and Model

The Change Management Foundation is shaped like a pyramid, with project management (managing the technical aspects) and people (implementing the change) at the base, and leadership (setting the direction) at the top. The Change Management Model consists of four stages:

  • Determine Need for Change
  • Prepare & Plan for Change
  • Implement the Change
  • Sustain the Change

 

2.12.1.3 The Plan-Do-Check-Act Cycle, and choosing which changes to implement

The Plan-Do-Check-Act Cycle, created by W. Edwards Deming, is a management method for the control and continuous improvement of processes, used here for choosing which changes to implement. When determining which of the latest techniques or innovations to adopt, there are four major factors to be considered:

  • Levels, goals, and strategies
  • Measurement system
  • Sequence of steps
  • Implementation and organizational changes
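
The cycle described above can be sketched in code. The following is a minimal, illustrative sketch only (the function and parameter names are hypothetical, not from any standard library): one pass plans a candidate change, trials it, checks the measured result against a goal, and then either adopts or discards the change.

```python
def pdca(process, propose_change, measure, target):
    """Run one Plan-Do-Check-Act cycle and return the process to keep."""
    candidate = propose_change(process)   # Plan: devise a candidate change
    score = measure(candidate)            # Do: trial the change; Check: measure it
    if score >= target:                   # Check: does it meet the goal?
        return candidate                  # Act: standardise the improvement
    return process                        # Act: discard the change and retry later

# Hypothetical example: a process tweak that raises measured throughput.
current = {"batch_size": 10, "throughput": 80}
improved = pdca(current,
                propose_change=lambda p: {"batch_size": 5, "throughput": 92},
                measure=lambda p: p["throughput"],
                target=90)
print(improved)  # the candidate is adopted because 92 >= 90
```

In practice the cycle is repeated continuously, with each adopted change becoming the new baseline for the next iteration.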

2.12.1.4 Managing the change process

Although there are many types of organizational changes, the critical aspect is a company's ability to win the buy-in of their organization's employees on the change. Effectively managing organizational change is a four-step process:

  • Recognizing the changes in the broader business environment
  • Developing the necessary adjustments for their company's needs
  • Training their employees on the appropriate changes
  • Winning the support of the employees with the persuasiveness of the appropriate adjustments

As a multi-disciplinary practice that has evolved as a result of scholarly research, organizational change management should begin with a systematic diagnosis of the current situation in order to determine both the need for change and the capability to change. The objectives, content, and process of change should all be specified as part of a change management plan. Change management processes should include creative marketing to enable communication between changing audiences, as well as deep social understanding about leadership styles and group dynamics. As a visible track on transformation projects, organizational change management aligns groups' expectations, integrates teams, and manages employee-training. It makes use of performance metrics, such as financial results, operational efficiency, leadership commitment, communication effectiveness, and the perceived need for change in order to design appropriate strategies, resolve troubled change projects, and avoid change failures.


2.12.1.5 Factors of successful change management

Successful change management is more likely to occur if the following are included:

  • Define measurable stakeholder aims and create a business case for their achievement (which should be continuously updated)
  • Monitor assumptions, risks, dependencies, costs, return on investment, dis-benefits and cultural issues
  • Communicate effectively to inform stakeholders of the reasons for the change (why?), the benefits of successful implementation (what is in it for us, and you) as well as the details of the change (when? where? who is involved? how much will it cost? etc.)
  • Devise an effective education, training and/or skills-upgrading scheme for the organization
  • Counter resistance from employees and align them to the overall strategic direction of the organization
  • Provide personal counseling (if required) to alleviate any change-related fears
  • Monitor implementation and fine-tune as and when required
     

2.12.2 Challenges

Change management faces the fundamental difficulties of integration, navigation, and human factors. It must also take into account the human aspect, where emotions and how they are handled play a significant role in implementing change successfully.

Integration

Traditionally, organizational development (OD) departments overlooked the role of infrastructure and the possibility of carrying out change through technology. Now, managers almost exclusively focus on the structural and technical components of change. Alignment and integration between strategic, social, and technical components requires collaboration between people with different skill-sets.

Navigation

Managing change over time, referred to as navigation, requires continuous adaptation. It requires managing projects over time against a changing context, from inter-organizational factors to marketplace volatility. It also requires a balance in bureaucratic organizations between top-down and bottom-up management, ensuring employee empowerment and flexibility.

Human factors

One of the major factors which hinders the change management process is people's natural tendency for inertia. Just as in Newton's first law of motion, people are resistant to change in organisations because it can be uncomfortable. The notion of doing things this way, because 'this is the way we have always done them', can be particularly hard to overcome. Furthermore, in cases where a company has seen declining fortunes, for a manager or executive to view themselves as a key part of the problem can be very humbling. This issue can be exacerbated in countries where "saving face" plays a large role in inter-personal relations.

To assist with this, a number of models have been developed that help identify an organization's readiness for change and then recommend the steps through which it could move. A common example is ADKAR, an acronym that stands for awareness, desire, knowledge, ability, and reinforcement. Whichever is the first level that does not apply to an individual, team, or organization is the first step to complete in helping them change.
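
The ADKAR rule, "the first level that does not apply is the first step to complete", can be expressed as a short sketch. All names here are illustrative (there is no standard ADKAR library); the assessment is simply a mapping from each stage to whether it is currently satisfied.

```python
# The five ADKAR stages, in the order they must be completed.
ADKAR_STAGES = ["awareness", "desire", "knowledge", "ability", "reinforcement"]

def next_adkar_step(assessment):
    """Return the first ADKAR stage not yet satisfied, or None if all are.

    `assessment` maps stage names to True (satisfied) or False; a missing
    stage is treated as not yet satisfied.
    """
    for stage in ADKAR_STAGES:
        if not assessment.get(stage, False):
            return stage
    return None

# Hypothetical example: a team that is aware and willing, but lacks the
# knowledge of how to change, so "knowledge" is the step to work on next.
team = {"awareness": True, "desire": True, "knowledge": False}
print(next_adkar_step(team))
```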
 

 

2.13 Lean Management

Lean manufacturing or lean production is a systematic method, originating in the Japanese manufacturing industry, for the minimization of waste within a manufacturing system without sacrificing productivity. 

 

2.13.1 What is Lean?

Lean management is an approach to running an organisation that supports the concept of continuous improvement: an ongoing effort to improve products, services, or processes through incremental improvement over time in order to increase efficiency and quality.

Lean management uses methods for eliminating factors that waste time, effort or money. This is accomplished by analysing a business process and then revising it or cutting out any steps that do not create value for customers.

Lean management principles are derived from the Japanese manufacturing industry and include:

  1. Defining value from the standpoint of the end customer.
  2. Identifying each step in a business process and eliminating those steps that do not create value.
  3. Making the value-creating steps occur in tight sequence.
  4. Repeating the first three steps on a continuous basis until all waste has been eliminated.

These lean principles ensure that the processes involved with bringing a product to market remain cost effective from beginning to end.

Lean production or lean manufacturing is a systematic method for the elimination of waste within a manufacturing process. This may include waste created through unevenness in workloads, overburden, and any work that does not add value. From the point of view of the customer who consumes a service or product, “value” is any process or action that a client would be willing to pay for. In essence, lean focuses on making value-adding work obvious by reducing everything else.

 

2.13.2 History of Lean Management

The avoidance of waste has a long history within the manufacturing industry. In fact, many of the concepts now seen as key to lean have been discovered and rediscovered over the years by others in their search to reduce waste. Lean manufacturing, as a management philosophy, came mostly from the Toyota Production System (TPS). The term “lean” was first introduced in the 1988 article “Triumph of the Lean Production System” by John Krafcik, based on his master’s thesis at the MIT Sloan School of Management. Before his studies, Krafcik had worked as a quality engineer at the Toyota-GM NUMMI joint venture.

Kiichiro Toyoda, founder of Toyota Motor Corporation, directed the engine casting work and discovered many problems in their manufacturing process. In 1936 his processes hit new problems and he developed the “Kaizen” improvement teams. Toyota’s view is that the main method of lean is not the tools, but the reduction of three types of waste:

  1. muda (“non-value-adding work”)
  2. muri (“overburden”)
  3. mura (“unevenness”)

This aids in exposing problems systematically and makes it easier to use the right tools where the ideal cannot be achieved. Taiichi Ohno, the Japanese industrial engineer and businessman considered the father of the Toyota Production System, proposed focusing on the reduction of the original Toyota “seven wastes” to improve overall customer value.

 

2.13.3 The Lean Management Tools

Many tools stand out within the concept of lean manufacturing, each representing a particular method:

  • 5S
  • Kanban (pull systems)
  • Value Stream Mapping
  • SMED
  • Poka-yoke (error-proofing)
  • Elimination of Time Batching
  • Total Productive Maintenance
  • Mixed Model Processing
  • Single Point Scheduling
  • Rank Order Clustering
  • Multi-process Handling
  • Redesigning Working Cells
  • Control Charts (for checking mura)

Some of these methods have grown into independent manufacturing concepts in their own right (such as kaizen and kanban).

2.13.3.1 Kaizen 5S

One way to cope with waste and effectively increase profitability is 5S. The name of this method comes from a list of five words that all start with the letter “S”: straighten, sort, standardize, shine, and sustain. These are translations of the original Japanese words seiton, seiri, seiketsu, seiso, and shitsuke. They describe ways of organising a workspace for maximum effectiveness and efficiency: identifying and storing the items used, maintaining the items and the area, and sustaining the new order.

5S has become a fundamental business measure and key driver for Kaizen. The 5 Steps are as follows:

  • Sort: Sort out and separate what is needed and not needed within the area.
  • Straighten: Arrange items that are needed so that they are ready and easy to use. Clearly identify locations for all items so that anyone can find them and return them once the task is completed.
  • Shine: Clean the workplace and equipment on a regular basis in order to maintain standards and identify defects.
  • Standardise: Revisit the first three of the 5S on a frequent basis and confirm the condition of the Gemba using standard procedures.
  • Sustain: Keep to the rules in order to maintain the standard and continuously improve every day.
     
2.13.3.2 Kanban

Another approach to waste reduction is kanban. In 1952, Taiichi Ohno introduced a kanban system at Toyota as a way to improve and maintain a high level of production.

Kanban became an effective tool to support running a production system as a whole, and an excellent way to promote improvement. One of the main benefits of a kanban system is that it establishes an upper limit on work-in-progress inventory, avoiding overloading of the manufacturing system. The “to do” – “doing” – “done” concept became the cornerstone of many online tools, such as Kanbanchi, used for managing projects and controlling workflows.

Taiichi Ohno stated that, to be effective, kanban must follow strict rules of use. Toyota, for example, has six simple rules and close monitoring of these rules is a never-ending task. This ensures that the kanban system does what is required:

  • A later process picks up the number of items indicated by the kanban at the earlier process.
  • The earlier process produces items in the quantity and sequence indicated by the kanban.
  • No items are made or transported without a kanban.
  • Always attach a kanban to the goods.
  • Defective products are not sent on to the subsequent process. The result is 100% defect-free goods.
  • Reducing the number of kanban increases the system’s sensitivity.
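
The core mechanism behind these rules — each kanban card is a permission to hold or produce one item, so the number of cards in circulation caps work in progress — can be sketched as follows. This is a toy illustration (the class and method names are hypothetical, not a real library), not a description of Toyota's system.

```python
class KanbanBoard:
    """Toy model of a kanban-limited workflow."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit   # number of kanban cards in circulation
        self.in_progress = []

    def pull(self, item):
        """A later process may pull work only while a card is free."""
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError("No free kanban: upstream must wait")
        self.in_progress.append(item)

    def complete(self, item):
        """Finishing an item returns its card, freeing capacity upstream."""
        self.in_progress.remove(item)

board = KanbanBoard(wip_limit=2)
board.pull("order-1")
board.pull("order-2")
# board.pull("order-3") would raise here: the limit makes overload visible.
board.complete("order-1")
board.pull("order-3")    # now allowed, because a card was returned
```

The same pull-and-limit idea underlies the “to do” / “doing” / “done” columns of modern kanban boards, where the “doing” column carries an explicit WIP limit.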
     
2.13.3.3 Value Stream Mapping

Value stream mapping is a lean-management method applicable to almost any value chain. It is used for analysing the current state and designing a future state for the series of events that take a service or product from the beginning through to the client. This method can also be used in:

  • Logistics
  • Supply Chain
  • Service Related Industries
  • Healthcare
  • Software Development
  • Product Development
 

Value stream mapping usually employs standard symbols to represent items and processes. A VSM is best created by using a pencil and drawing by hand on a sheet of A3 paper, as you will need to make frequent corrections and changes. Even in the late 1990s these techniques were largely unknown outside of Toyota.

VSM is a relatively recent addition to the TPS toolbox. John Shook and Mike Rother co-authored the book “Learning to See”, published by the Lean Enterprise Institute, which made material- and information-flow mapping widely accessible for application outside of Toyota.

Value stream mapping is a flexible tool that lets us put all of the information into one place in a manner that is not possible with process mapping or other tools.

 

2.13.3.4 Six Sigma

Sigma is a statistical term that measures how far a process deviates from perfection. Like Kaizen, Six Sigma is a management philosophy focused on making continuous improvements across various processes. It was first introduced in 1986 by Bill Smith at Motorola.

Unlike Kaizen, which has the primary goal of increasing the efficiency of all aspects of processes, Six Sigma focuses on improving the quality of the final product by finding and eliminating causes of defects. Six Sigma uses more statistical analysis than Kaizen and aims for as close to zero defects as possible. A sigma rating describes the maturity of a manufacturing process by indicating the percentage of defect-free products it creates (its yield). Organizations need to determine an appropriate sigma level for each of their most important processes and strive to achieve it.
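
The relationship between a sigma level and its defect rate can be worked out numerically. The sketch below assumes the conventional 1.5-sigma long-term process shift; under that assumption, the often-quoted Six Sigma figure of roughly 3.4 defects per million opportunities (DPMO) falls out of the normal distribution.

```python
from statistics import NormalDist

def defects_per_million(sigma_level, shift=1.5):
    """Approximate long-term defects per million opportunities (DPMO).

    Assumes the conventional 1.5-sigma shift between short-term process
    capability and long-term performance.
    """
    yield_fraction = NormalDist().cdf(sigma_level - shift)
    return (1.0 - yield_fraction) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {defects_per_million(level):,.1f} DPMO")
# 3 sigma corresponds to roughly 66,807 DPMO, 6 sigma to roughly 3.4 DPMO.
```

This makes concrete why moving a process from 3 to 6 sigma is such a dramatic quality improvement: the defect rate drops by more than four orders of magnitude.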

The core tool used to drive Six Sigma projects is the DMAIC improvement cycle. DMAIC is an abbreviation of the five improvement steps it comprises: Define, Measure, Analyze, Improve and Control. All of the DMAIC process steps are required and always proceed in the given order.

DMAIC refers to a data-driven improvement cycle used for improving, optimizing and stabilizing business processes and designs. DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications.

The Six Sigma concept asserts:

  • Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.
  • Manufacturing and business processes have characteristics that can be measured, analyzed, controlled and improved.
  • Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.

In addition, Six Sigma sets out the following quality improvement initiatives:

  • A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather than assumptions and guesswork.
  • An increased emphasis on strong and passionate management leadership and support.
  • A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.

By the end of the 1990s, over 60% of Fortune 500 organisations had started to apply Six Sigma. As of 2006, Motorola had reported about $17 billion in savings as a direct result of implementing Six Sigma.

In recent years, practitioners have combined Six Sigma ideas with lean manufacturing to create the Lean Six Sigma methodology. It treats Six Sigma's focus on variation and design and lean manufacturing's focus on process flow and waste as complementary disciplines aimed at promoting business and operational effectiveness.
 

 

2.14 Advanced IT Management

Modern IT Management Roadmap: Driving Transformation from Legacy to the Cloud. Business transformation through technology is happening faster and more pervasively than at any other point in history. In today’s connected, mobile, application-based world, organizations must constantly innovate to satisfy the appetite of customers who expect quick and reliable service at their fingertips, on any device they choose. With the latest advancements in cloud technology, the ability to modernize for better security, greater functionality, and workforce efficiency is now within reach of organizations of all sizes.

When it comes to facilitating innovation, modern IT management is essential. Gone are the days where IT was simply concerned with operations. Today, IT infrastructure is the foundation upon which a company can continue to transform products and services, accelerate operations, and empower users to do more with less.

But even with all this focus on innovation in the cloud, there still seems to be a lot of confusion around how to get from the “legacy state” to the “dream state” – a fully cloud-managed platform – while avoiding potential risks along the way, such as deployment failure, security threats, and an inability to keep up with constant change.
 

2.14.1 Common Challenges of Modern IT Management

If your business is brand new, the path to modern IT management is relatively straightforward – start building in the cloud. However, most companies don’t have the luxury of starting with a blank slate. If you’re a well-established business, many of your mission-critical applications have been built on-premises. Over the years, they’ve become bloated or inefficient, which means they’ll still be bloated and inefficient in the cloud. Furthermore, most applications developed in a pre-cloud era are not naturally suited for cloud environments. Concerns around security, compliance, and compatibility, especially when migrating mission-critical applications, are top of mind for all businesses as they embark on their modernization journeys.

When transitioning from a legacy state to a dream state, how will you know what to keep, what to add, what to turn off, and in what order? In other words, how do you modernize for success in the future while keeping the business running today?

This is where having a roadmap to modern IT management is essential.

As you progress down the path towards modernization, you’ll inevitably need more digital experts as new innovations will require a new set of skills. For many mid-size organizations, this means partnering with a group of experts who can provide a clear, often staged approach to modern IT management in the cloud.
 

2.14.2 The Path to Modern IT Management in the Cloud

The path to modern IT management begins by assessing where your organization is in terms of legacy systems and innovation. In collaboration with various areas of the business, you can build an inventory of your existing IT assets.

Once you know where you are, you can begin to define where you want to go. This process involves aligning the right tools and processes to your corporate strategy. It’s important that both business and IT executives and managers are involved in the process from start to finish to ensure that decisions are based on business constraints, dependencies, priorities, and budget. This is where a digital strategist like SWC can help bridge the gap. Ultimately, your modernization management strategy should be envisioned with three main objectives in mind:

  • Reduce Cost: Reduce redundant infrastructure and services and pay only for what you use; decrease the cost of maintaining older IT infrastructure to open up space for new, innovative IT initiatives that provide greater value to the organization.
  • Mitigate Risk: Leverage the latest cybersecurity technology that includes systems which can easily be patched and updated. Proactively build cybersecurity into new technology investments.
  • Accelerate Transformation: Build a modern infrastructure in the cloud where you can more readily and easily drive your company’s mission and thrive in today’s rapidly changing business environment.
     

2.14.3 A Look at a Modern IT Management Roadmap

A modern IT management roadmap presents a comprehensive view of your organization’s technology strategies in the right sequence. While the details of each roadmap may look different for each unique organization, the following example gives you an idea of what a common path may look like to help you devise your own transformation strategy.


2.14.4 RETHINK USER EXPERIENCE & MANAGEMENT

Traditional desktop and laptop imaging and deployment takes up a large amount of IT resources and time. It comes with the burden of on-premises hardware, licensing, staff expertise, and a regular cadence of management, maintenance, and troubleshooting. Self-service technology such as AutoPilot, along with Azure AD and Intune, marks the start of modern user experience and management.

Modernizing Windows deployment with AutoPilot allows IT professionals to automate the deployment and management of new PCs. With just a few clicks, you can have a fully-configured device ready for use! This creates an exciting opportunity to increase efficiencies in ways that will save time, money, and ultimately free up IT resources to focus on more value-added initiatives that drive the business forward.
 

2.14.5 REMOVE INFRASTRUCTURE COST AND DEPENDENCIES

As company databases continue to expand and become more complex, running these databases on-premises means a lot of tedious management and unnecessary hardware costs. Cloud-based services such as IaaS and RDS, among the most fundamental layers of cloud computing, allow IT to stay ahead of this growth. There is no need to buy, rack, and stack hardware. With just one click, companies can scale up or down depending on business needs and pay only for what they use. From a business perspective, the most obvious benefits of moving infrastructure to the cloud are reduced cost and greater flexibility.

With these benefits comes the opportunity to leverage other cloud-based services and applications. We often see organizations take advantage of services such as Cloud Print, which lets you print from anywhere to any printer location with ease. Some other common applications that are the first to make their way to the cloud include Exchange Online for secure and reliable email management, as well as SharePoint Online for streamlined collaboration among colleagues.
 

2.14.6 REALIZE MODERN IT

The “Dream State” of modern IT management is a place where the business operates on a fully cloud-managed platform. For most organizations, this state is not realistic or even advisable in the present day. However, as previously noted, the cloud is where tomorrow’s technology will be built. Companies should begin acting now to transform outdated legacy applications to the cloud so they can more readily embrace the next level of innovation.

One area of management that we often see moving to the cloud at this stage is managing Active Directory security policies. GPO management for individual users and computers is a time-consuming task. With Intune, you can enroll devices under a cloud-based service to comply with your corporate policies and help you manage both corporate and personal (BYO) devices. Taking it a step further, organizations can ultimately ditch on-premises Active Directory completely and move to using Azure Active Directory in the cloud.
 

2.14.7 The Time for IT Modernization is Now and Always

In a roadmap to modern IT management, it’s important to note that the transformation is equally as much about culture as it is about technology. The tools and practices outlined above are part of the transformation, but the true measure of success is how well your people can embrace the change that comes with rapid innovation. And once you reach your destination, the journey continues.

The time for IT modernization is now and always. It is the “continuous evolution of an organization’s existing application and infrastructure software, with the goal of aligning IT with the organization’s ever-shifting business strategies.” SWC works closely with organizations to define what they want to achieve and work directly with both business and IT leaders to set clear objectives for their IT modernization projects.

 

 

2.15 Entrepreneurship

Entrepreneurship is the process of designing, launching and running a new business, which is often initially a small business. The people who create these businesses are called entrepreneurs.

Entrepreneurship has been described as the "capacity and willingness to develop, organize and manage a business venture along with any of its risks in order to make a profit." While definitions of entrepreneurship typically focus on the launching and running of businesses, due to the high risks involved in launching a start-up, a significant proportion of start-up businesses have to close due to "lack of funding, bad business decisions, an economic crisis, lack of market demand, or a combination of all of these."

A broader definition of the term is sometimes used, especially in the field of economics. In this usage, an Entrepreneur is an entity which has the ability to find and act upon opportunities to translate inventions or technologies into products and services: "The entrepreneur is able to recognize the commercial potential of the invention and organize the capital, talent, and other resources that turn an invention into a commercially viable innovation." In this sense, the term "Entrepreneurship" also captures innovative activities on the part of established firms, in addition to similar activities on the part of new businesses. 

 

2.15.1 Elements of Entrepreneurship

Entrepreneurship is the act of being an entrepreneur, or "the owner or manager of a business enterprise who, by risk and initiative, attempts to make profits". Entrepreneurs act as managers and oversee the launch and growth of an enterprise. Entrepreneurship is the process by which either an individual or a team identifies a business opportunity and acquires and deploys the necessary resources required for its exploitation. Early-19th-century French economist Jean-Baptiste Say provided a broad definition of entrepreneurship, saying that it "shifts economic resources out of an area of lower and into an area of higher productivity and greater yield". Entrepreneurs create something new, something different—they change or transmute values. Firms of any size, large or small, can partake in entrepreneurship opportunities. The opportunity to become an entrepreneur requires four criteria. First, there must be opportunities or situations to recombine resources to generate profit. Second, entrepreneurship requires differences between people, such as preferential access to certain individuals or the ability to recognize information about opportunities. Third, taking on risk is a necessity. Fourth, the entrepreneurial process requires the organization of people and resources.

The study of entrepreneurship reaches back to the work of Richard Cantillon and Adam Smith in the late 17th and early 18th centuries. However, entrepreneurship was largely ignored theoretically until the late 19th and early 20th centuries and empirically until a profound resurgence in business and economics since the late 1970s. In the 20th century, the understanding of entrepreneurship owes much to the work of economist Joseph Schumpeter in the 1930s and other Austrian economists such as Carl Menger, Ludwig von Mises and Friedrich von Hayek. According to Schumpeter, an entrepreneur is a person who is willing and able to convert a new idea or invention into a successful innovation. Entrepreneurship employs what Schumpeter called "the gale of creative destruction" to replace in whole or in part inferior innovations across markets and industries, simultaneously creating new products including new business models. In this way, creative destruction is largely responsible for the dynamism of industries and long-run economic growth. The supposition that entrepreneurship leads to economic growth is an interpretation of the residual in endogenous growth theory and as such is hotly debated in academic economics. An alternative description posited by Israel Kirzner suggests that the majority of innovations may be much more incremental improvements, such as the replacement of paper with plastic in the making of drinking straws.

The exploitation of entrepreneurial opportunities may include:

  • Developing a business plan
  • Hiring the human resources
  • Acquiring financial and material resources
  • Providing leadership
  • Being responsible for both the venture's success or failure
  • Risk aversion

Economist Joseph Schumpeter (1883–1950) saw the role of the entrepreneur in the economy as "creative destruction" – launching innovations that simultaneously destroy old industries while ushering in new industries and approaches. For Schumpeter, the changes and "dynamic disequilibrium brought on by the innovating entrepreneur [were] the norm of a healthy economy". While entrepreneurship is often associated with new, small, for-profit start-ups, entrepreneurial behavior can be seen in small-, medium- and large-sized firms, new and established firms and in for-profit and not-for-profit organizations, including voluntary-sector groups, charitable organizations and government.

Entrepreneurship may operate within an entrepreneurship ecosystem which often includes:

  • Government programs and services that promote entrepreneurship and support entrepreneurs and start-ups
  • Non-governmental organizations such as small-business associations and organizations that offer advice and mentoring to entrepreneurs (e.g. through entrepreneurship centers or websites)
  • Small-business advocacy organizations that lobby governments for increased support for entrepreneurship programs and more small business-friendly laws and regulations
  • Entrepreneurship resources and facilities (e.g. business incubators and seed accelerators)
  • Entrepreneurship education and training programs offered by schools, colleges and universities
  • Financing (e.g. bank loans, venture capital financing, angel investing and government and private foundation grants)

In the 2000s, usage of the term "entrepreneurship" expanded to include how and why some individuals (or teams) identify opportunities, evaluate them as viable, and then decide to exploit them. The term has also been used to discuss how people might use these opportunities to develop new products or services, launch new firms or industries, and create wealth. The entrepreneurial process is uncertain because opportunities can only be identified after they have been exploited.

Entrepreneurs tend to exhibit positive biases toward finding new possibilities and seeing unmet market needs, and a tendency toward risk-taking that makes them more likely to exploit business opportunities.

 

2.15.2 Psychological makeup
 

Stanford University economist Edward Lazear found in a 2005 study that variety in education and work experience was the most important trait that distinguished entrepreneurs from non-entrepreneurs. A 2013 study by Uschi Backes-Gellner of the University of Zurich and Petra Moog of the University of Siegen in Germany found that a diverse social network was also important in distinguishing students who would go on to become entrepreneurs.

Studies show that the psychological propensities for male and female entrepreneurs are more similar than different. Empirical studies suggest that female entrepreneurs possess strong negotiating skills and consensus-forming abilities. Asa Hansson, who looked at empirical evidence from Sweden, found that the probability of becoming self-employed decreases with age for women, but increases with age for men.[109] She also found that marriage increased the probability of a person becoming an entrepreneur.

Jesper Sørensen wrote that significant influences on the decision to become an entrepreneur are workplace peers and social composition. Sørensen discovered a correlation between working with former entrepreneurs and how often these individuals become entrepreneurs themselves, compared to those who did not work with entrepreneurs. Social composition can influence entrepreneurialism in peers by demonstrating the possibility for success, stimulating a "He can do it, why can't I?" attitude. As Sørensen stated: "When you meet others who have gone out on their own, it doesn't seem that crazy".

Entrepreneurs may also be driven to entrepreneurship by past experiences: if they have faced multiple work stoppages or have been unemployed in the past, the probability of their becoming an entrepreneur increases.[109] In Cattell's personality framework, both personality traits and attitudes are thoroughly investigated by psychologists; in entrepreneurship research, however, these notions are employed by academics only vaguely. According to Cattell, personality is a system that is related to the environment, and this system seeks to explain the complex transactions conducted by both traits and attitudes, since both bring about change and growth in a person. Personality informs what an individual will do when faced with a given situation; a person's response is triggered by their personality and the situation at hand.

Innovative entrepreneurs may be more likely to experience what psychologist Mihaly Csikszentmihalyi calls "flow". "Flow" occurs when an individual forgets about the outside world due to being thoroughly engaged in a process or activity. Csikszentmihalyi suggested that breakthrough innovations tend to occur at the hands of individuals in that state. Other research has concluded that a strong internal motivation is a vital ingredient for breakthrough innovation.[114] Flow can be compared to Maria Montessori's concept of normalization, a state that includes a child's capacity for joyful and lengthy periods of intense concentration. Csikszentmihalyi acknowledged that Montessori's prepared environment offers children opportunities to achieve flow. Thus quality and type of early education may influence entrepreneurial capability.

Research on high-risk settings such as oil platforms, investment banking, medical surgery, aircraft piloting and nuclear power plants has related distrust to failure avoidance. When non-routine strategies are needed, distrusting persons perform better, while when routine strategies are needed, trusting persons perform better. This research was extended to entrepreneurial firms by Gudmundsson and Lechner.[118] They argued that in entrepreneurial firms the threat of failure is ever present, resembling non-routine situations in high-risk settings. They found that the firms of distrusting entrepreneurs were more likely to survive than the firms of optimistic or overconfident entrepreneurs. The reasons were that distrusting entrepreneurs would emphasize failure avoidance through sensible task selection and more analysis. Kets de Vries has pointed out that distrusting entrepreneurs are more alert to their external environment.[119] He concluded that distrusting entrepreneurs are less likely to discount negative events and are more likely to engage control mechanisms. Similarly, Gudmundsson and Lechner found that distrust leads to higher precaution and therefore increases the chances of entrepreneurial firm survival.

Researchers Schoon and Duckworth completed a study in 2012 that could potentially help identify who may become an entrepreneur at an early age. They determined that the best measures to identify a young entrepreneur are family and social status, parental role modeling, entrepreneurial competencies at age 10, academic attainment at age 10, generalized self-efficacy, social skills, entrepreneurial intention and experience of unemployment. 

2.15.2.1 Strategic

Some scholars have constructed an operational definition of a more specific subcategory called "Strategic Entrepreneurship". Closely tied with principles of strategic management, this form of entrepreneurship is "concerned about growth, creating value for customers and subsequently creating wealth for owners". A 2011 article for the Academy of Management provided a three-step, "Input-Process-Output" model of strategic entrepreneurship. The model's three steps entail the collection of different resources, the process of orchestrating them in the necessary manner and the subsequent creation of competitive advantage, value for customers, wealth and other benefits. Through the proper use of strategic management/leadership techniques and the implementation of risk-bearing entrepreneurial thinking, the strategic entrepreneur is therefore able to align resources to create value and wealth.

2.15.2.2 Leadership

Leadership in entrepreneurship can be defined as a "process of social influence in which one person can enlist the aid and support of others in the accomplishment of a common task" in "one who undertakes innovations, finance and business acumen in an effort to transform innovations into economic goods". This refers not only to the act of entrepreneurship as managing or starting a business, but to how one manages to do so through these social processes, or leadership skills. Entrepreneurship in itself can be defined as "the process by which individuals, teams, or organizations identify and pursue entrepreneurial opportunities without being immediately constrained by the resources they currently control". An entrepreneur typically has a mindset that seeks out potential opportunities during uncertain times. An entrepreneur must have leadership skills or qualities to see potential opportunities and act upon them. At the core, an entrepreneur is a decision maker. Such decisions often affect an organization as a whole, which is representative of their leadership within the organization.

With the growing global market and increasing technology use throughout all industries, decision-making at the core of entrepreneurship has become an ongoing process rather than a series of isolated incidents. This becomes knowledge management, which is "identifying and harnessing intellectual assets" for organizations to "build on past experiences and create new mechanisms for exchanging and creating knowledge". This belief draws upon a leader's past experiences that may prove useful. It is a common mantra that one should learn from past mistakes, so leaders should turn their failures to their advantage. In this way, a leader's accumulated experience feeds directly into entrepreneurial decision-making.

2.15.2.3 Global leadership

The majority of scholarly research done on these topics has been from North America. Words like "leadership" and "entrepreneurship" do not always translate well into other cultures and languages. For example, in North America a leader is often thought to be charismatic, but German culture frowns on such charisma because of the charisma of Nazi leader Adolf Hitler. Other cultures, such as France, view the term "leader" negatively. The participative leadership style that is encouraged in the United States is considered disrespectful in many other parts of the world because of differences in power distance. Many Asian and Middle Eastern countries do not have "open door" policies; subordinates would never informally approach their managers or bosses. In such countries, an authoritarian approach to management and leadership is more customary.

Despite cultural differences, the successes and failures of entrepreneurs can be traced to how leaders adapt to local conditions. With the increasingly global business environment a successful leader must be able to adapt and have insight into other cultures. To respond to the environment, corporate visions are becoming transnational in nature, to enable the organization to operate in or provide services/goods for other cultures.

 

2.15.3 Entrepreneurship training and education

Michelacci and Schivardi are researchers who believe that identifying and comparing the relationships between an entrepreneur's earnings and education level would determine the rate and level of success. Their study focused on two education levels: college degree and post-graduate degree. While Michelacci and Schivardi do not specifically identify characteristics or traits of successful entrepreneurs, they do believe that there is a direct relationship between education and success, noting that a college education does contribute to advancement in the workforce.

Michelacci and Schivardi state that there has been a rise in the number of self-employed people with a baccalaureate degree. However, their findings also show that the share of the self-employed who hold a graduate degree has remained consistent over time at about 33 percent. They briefly mention famous entrepreneurs like Steve Jobs and Mark Zuckerberg, who were college dropouts, but treat these cases as exceptional, since many entrepreneurs view formal education as costly, mainly because of the time that must be spent on it. Michelacci and Schivardi believe that for individuals to reach full success they need education beyond high school. Their research shows that the higher the education level, the greater the success. The reason is that college gives people additional skills that can be used within their business and lets them operate on a higher level than someone who only "runs" it. 
 

2.15.4 Resources and financing

2.15.4.1 Entrepreneurial resources

An entrepreneurial resource is any company-owned asset with economic value-creating capability. Both tangible and intangible sources of economic value are considered entrepreneurial resources; their economic value is realized through the activities or services entrepreneurs mobilize them for. Entrepreneurial resources can be divided into two fundamental categories: tangible and intangible resources.

Tangible resources are material sources such as equipment, buildings, furniture, land, vehicles, machinery, stock, cash, bonds and inventory that have a physical form and can be quantified. By contrast, intangible resources are nonphysical and more challenging to identify and evaluate, and they often possess more value-creating capacity: human resources including skills and experience in a particular field, the organizational structure of the company, brand name, reputation, entrepreneurial networks that contribute to promotion and financial support, know-how, and intellectual property including copyrights, trademarks and patents.

2.15.4.2 Bootstrapping

At least early on, entrepreneurs often "bootstrap-finance" their start-up rather than seeking external investors from the start. One reason some entrepreneurs prefer to "bootstrap" is that obtaining equity financing requires the entrepreneur to provide ownership shares to the investors. If the start-up becomes successful later on, these early equity financing deals could provide a windfall for the investors and a huge loss for the entrepreneur. If investors have a significant stake in the company, they may also be able to exert influence on company strategy, the choice of chief executive officer (CEO) and other important decisions. This is often problematic because the investor and the founder might have different incentives regarding the long-term goal of the company. An investor will generally aim for a profitable exit and therefore promotes a high-valuation sale of the company or an IPO in order to sell their shares, whereas the entrepreneur might have philanthropic intentions as their main driving force. Soft values like these might not sit well with the short-term pressure for yearly and quarterly profits that publicly traded companies often experience from their owners.

One consensus definition of bootstrapping sees it as "a collection of methods used to minimize the amount of outside debt and equity financing needed from banks and investors". The majority of businesses require less than $10,000 to launch, which means that personal savings are most often used to start. In addition, bootstrapping entrepreneurs often incur personal credit-card debt, but they also can utilize a wide variety of methods. While bootstrapping involves increased personal financial risk for entrepreneurs, the absence of any other stakeholder gives the entrepreneur more freedom to develop the company.

Bootstrapping methods include:

  • Owner financing, including savings, personal loans and credit card debt
  • Working capital management that minimizes accounts receivable
  • Joint utilization, such as reducing overhead by coworking or using independent contractors
  • Increasing accounts payable by delaying payment, or leasing rather than buying equipment
  • Lean manufacturing strategies such as minimizing inventory and lean startup to reduce product development costs
  • Subsidy finance

2.15.4.3 Additional financing

Many businesses need more capital than can be provided by the owners themselves. In this case, a range of options is available including a wide variety of private and public equity, debt and grants. Private equity options include:

  • Startup accelerators
  • Angel investors
  • Venture capital investors
  • Equity crowdfunding
  • Hedge funds

Debt options open to entrepreneurs include:

  • Loans from banks, financial technology companies and economic development organizations
  • Line of credit also from banks and financial technology companies
  • Microcredit also known as microloans
  • Merchant cash advance
  • Revenue-based financing

Grant options open to entrepreneurs include:

  • Equity-free accelerators
  • Business plan/business pitch competitions for college entrepreneurs and others
  • Small Business Innovation Research grants from the U.S. government

2.15.4.4 Effect of taxes

Entrepreneurs are faced with liquidity constraints and often lack the necessary credit needed to borrow large amounts of money to finance their venture. Because of this, many studies have been done on the effects of taxes on entrepreneurs. The studies fall into two camps: the first camp finds that taxes help and the second argues that taxes hurt entrepreneurship.

Cesaire Assah Meh found that corporate taxes create an incentive to become an entrepreneur to avoid double taxation. Donald Bruce and John Deskins found literature suggesting that a higher corporate tax rate may reduce a state's share of entrepreneurs. They also found that states with an inheritance or estate tax tend to have lower entrepreneurship rates when using a tax-based measure. However, another study found that states with a more progressive personal income tax have a higher percentage of sole proprietors in their workforce. Ultimately, many studies find that the effect of taxes on the probability of becoming an entrepreneur is small. Donald Bruce and Mohammed Mohsin found that it would take a 50 percentage point drop in the top tax rate to produce a one percent change in entrepreneurial activity.
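The Bruce and Mohsin result implies a very low sensitivity of entrepreneurial activity to top tax rates. As a back-of-the-envelope sketch (the linear 50-points-per-one-percent relationship is an illustrative simplification, not part of their model), the implied effect of a given tax cut can be computed directly:

```python
# Sketch of the sensitivity implied by Bruce and Mohsin's finding:
# a 50 percentage-point drop in the top tax rate corresponds to roughly
# a 1 percent change in entrepreneurial activity.

def implied_activity_change(rate_drop_pp: float, pp_per_percent: float = 50.0) -> float:
    """Percent change in entrepreneurial activity implied by a top-tax-rate
    drop of `rate_drop_pp` percentage points, assuming the linear
    50-points-per-percent relationship holds throughout."""
    return rate_drop_pp / pp_per_percent

# Under this assumption, even a sizeable 5-point tax cut would move
# entrepreneurial activity by only about 0.1 percent:
print(implied_activity_change(5.0))  # → 0.1
```

The point of the sketch is the magnitude: realistic tax changes of a few percentage points translate, on this reading, into fractions of a percent of entrepreneurial activity, which is why many studies conclude the overall effect of taxes is small.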

 

2.15.5 Predictors of success

Factors that may predict entrepreneurial success include the following:

2.15.5.1 Methods
  • Establishing strategies for the firm, including growth and survival strategies
  • Maintaining the human resources (recruiting and retaining talented employees and executives)
  • Ensuring the availability of required materials (e.g. raw resources used in manufacturing, computer chips, etc.)
  • Ensuring that the firm has one or more unique competitive advantages
  • Ensuring good organizational design, sound governance and organizational coordination
  • Congruency with the culture of the society
2.15.5.2 Market
  • Business-to-business (B2B) or business-to-consumer (B2C) models can be used
  • High growth market
  • Target customers or markets that are untapped or missed by others
2.15.5.3 Industry
  • Growing industry
  • High technology impact on the industry
  • High capital intensity
  • Small average incumbent firm size
2.15.5.4 Team
  • Large, gender-diverse and racially diverse team with a range of talents, rather than an individual entrepreneur
  • Graduate degrees
  • Management experience prior to start-up
  • Work experience in the start-up industry
  • Employed full-time prior to new venture as opposed to unemployed
  • Prior entrepreneurial experience
  • Full-time involvement in the new venture
  • Motivated by a range of goals, not just profit
  • Number and diversity of team members' social ties and breadth of their business networks
2.15.5.5 Company
  • Written business plan
  • Focus on a unified, connected product line or service line
  • Competition based on a dimension other than price (e.g. quality or service)
  • Early, frequent intense and well-targeted marketing
  • Tight financial controls
  • Sufficient start-up and growth capital
  • Corporation model, not sole proprietorship
2.15.5.6 Status
  • Wealth can enable an entrepreneur to cover start-up costs and deal with cash flow challenges
  • Dominant race, ethnicity or gender in a socially stratified culture

 
