Informatyka w Firmie Archive


BCG Report: Are You Making the Most of Your Relationship with AI?

A joint report from BCG and MIT Sloan Management Review suggests that, in order to see significant financial returns, organizations need a multidimensional, complex relationship with AI, one that involves several methods of learning and different modes of interaction.

Businesses everywhere are recognizing the power of AI to improve processes, meet customer needs, enter new spaces, and, above all, to gain sustainable competitive advantage. With this recognition has come an increased adoption of—and investment in—AI technologies. A global survey of more than 3,000 executives revealed that more than half of respondents are deploying AI: six out of ten have an AI strategy in 2020, up from four out of ten in 2018. AI solutions are more prolific and easier to deploy than ever before, and companies around the globe are seizing on the opportunity to keep up with this exciting trend. Yet despite their efforts—to hire data scientists, develop algorithms, and optimize processes and decision making—most companies aren’t seeing a significant return on their investments.

So, what allows a small number of companies to stand out from the crowd?

For them, AI isn’t just a path to automation; it’s an integral, strategic component of their businesses. To achieve significant financial benefits, companies must look beyond the initial, albeit fundamental, steps of AI adoption—of having the right data, technology, and talent in place, and organizing these elements around a corporate strategy. Currently, companies have only a 21% chance of achieving significant benefits with these fundamentals alone, though incorporating the ability to iterate on AI solutions with business users nearly doubles the number, to 39%. But it’s the final stage of AI maturity, of successfully orchestrating the macro and micro interactions between humans and machines, that really unlocks value. The ability to learn as an organization—by bringing together human brains and the logic of machines—is what gives companies a 73% chance of reaping the financial benefits of AI implementation.

More: To embrace AI’s full potential, companies must recognize that humans play an equally important role in the equation—and reshape themselves accordingly. Download the Full Report

Authors: Sam Ransbotham, Associate Professor, Boston College/MIT Sloan Management Review; Shervin Khodabandeh, Managing Director & Senior Partner, Los Angeles; David Kiron, Executive Editor, MIT Sloan Management Review’s Big Ideas initiatives; François Candelon, Managing Director & Senior Partner, Global Director of the BCG Henderson Institute, Paris; Michael Chu, Partner and Associate Director, Data Science, Silicon Valley – Bay Area; Burt LaFountain, Managing Director & Partner, Boston


BCG Six Steps to Bridge the Responsible AI Gap

As artificial intelligence assumes a more central role in countless aspects of business and society, so does the need to ensure its responsible use. AI has dramatically improved financial performance, employee experience, and product and service quality for millions of customers and citizens, but it has also inflicted harm. AI systems have offered lower credit card limits to women than men despite similar financial profiles. Digital ads have demonstrated racial bias in housing and mortgage offers. Users have tricked chatbots into making offensive and racist comments. Algorithms have produced inaccurate diagnoses and recommendations for cancer treatments.

To counter such AI failures, companies have recognized the need to develop and operate AI systems that work in the service of good while achieving transformative business impact—thinking beyond barebones algorithmic fairness and bias in order to identify potential second- and third-order effects on safety, privacy, and society at large. These are all elements of what has become known as Responsible AI.

Companies know they need to develop this capability, and many have already created Responsible AI principles to guide their actions. The big challenge lies in execution. Companies often don’t recognize, or know how to bridge, the gulf between principles and tangible actions—what we call crossing the “Responsible AI Gap.” To help cross the divide, we have distilled our learnings from engagements with multiple organizations into six basic steps that companies can follow.

The Upside of Responsible AI

Concern is growing both inside and outside boardrooms about the ethical risks associated with AI systems. A survey conducted by the Center for the Governance of AI at the University of Oxford showed that 82% of respondents believe that AI should be carefully managed. Two-thirds of internet users surveyed by the Brookings Institution feel that companies should have an AI code of ethics and review board.

Much of this concern has arisen from failures of AI systems that have received widespread media attention. Executives have begun to understand the risks that poorly designed AI systems can create—from costly litigation to financial losses. The reputational damage and employee disengagement that result from public AI lapses can have far-reaching effects.

But companies should not view Responsible AI simply as a risk-avoidance mechanism. Doing so misses the upside potential that companies can realize by pursuing it. In addition to representing an authentic and ethical “True North” to guide initiatives, Responsible AI can generate financial rewards that justify the investment.

A Stronger Bottom Line. Companies that practice Responsible AI—and let their clients and users know they do so—have the potential to increase market share and long-term profitability. Responsible AI can be used to build high-performing systems with more reliable and explainable outcomes. When based on the authentic and ethical strengths of an organization, these outcomes help build greater trust, improve customer loyalty, and ultimately boost revenues. Major companies such as Salesforce, Microsoft, and Google have publicized the robust steps they have taken to implement Responsible AI. And for good reason: people weigh ethics three times more heavily than competence when assessing a company’s trustworthiness, according to Edelman research. Lack of trust carries a heavy financial cost. In the US, BCG research shows that companies lost one-third of revenue from affected customers in the year following a data misuse incident.

Brand Differentiation. Increasingly, companies have grown more focused on staying true to their purpose and their foundational principles. And customers are increasingly making choices to do business with companies whose demonstrated values are aligned with their own. Companies that deliver what BCG calls total societal impact (TSI)—the aggregate of their impact on society—boast higher margins and valuations. Organizations must make sure that their AI initiatives are aligned with what they truly value and the positive impact they seek to make through their purpose. The benefit of focusing strictly on compliance pales in comparison with the value gained from strengthening connections to customers and employees in an increasingly competitive business environment.

Improved Recruiting and Retention. Responsible AI helps attract the elite digital talent that is critical to the success of firms worldwide. In the UK, one in six AI workers has quit his or her job rather than play a role in the development of potentially harmful products. That is more than three times the attrition rate of the technology sector as a whole, according to research from Doteveryone. In addition to inspiring the employees who build and deploy AI, implementing AI systems in a responsible manner can also empower workers across the entire organization. For example, Responsible AI can help ensure that AI systems schedule workers in ways that balance employee and company objectives. By building more sustainable schedules, companies will see employee turnover fall, reducing the costs of hiring and training—over $80 billion annually in the US alone.
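
The idea of schedules that balance employee and company objectives can be made concrete with a small optimization sketch. The example below is purely illustrative and not taken from the BCG report: the shifts, preference scores, and objective weights are all assumptions, and a real workforce-scheduling system would use a proper solver and richer constraints rather than brute force.

# Minimal sketch (hypothetical data): score candidate shift assignments by
# combining a company objective (coverage of required shifts) with an
# employee objective (respecting stated preferences).
from itertools import product

required_shifts = ["early", "late", "night"]           # company wants every shift covered
preferences = {                                        # higher value = employee prefers that shift
    "Alice": {"early": 3, "late": 1, "night": 0},
    "Bob":   {"early": 3, "late": 1, "night": 2},
    "Cara":  {"early": 0, "late": 2, "night": 3},
}

def schedule_score(assignment, w_company=0.5, w_employee=0.5):
    """Weighted score: shift coverage (company) plus normalized preference (employee)."""
    coverage = len(set(assignment.values())) / len(required_shifts)
    pref = sum(preferences[p][s] for p, s in assignment.items()) / (3 * len(assignment))
    return w_company * coverage + w_employee * pref

# Brute-force search over all worker-to-shift assignments (fine at this toy size).
candidates = (dict(zip(preferences, combo))
              for combo in product(required_shifts, repeat=len(preferences)))
best = max(candidates, key=schedule_score)
print(best, round(schedule_score(best), 2))

Raising w_employee relative to w_company lets the search favour employee preferences over coverage when the two conflict, which is one way to encode the "more sustainable schedules" the authors describe.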

More: https://www.bcg.com/

By Steven Mills, Elias Baltassis, Maximiliano Santinelli, Cathy Carlisi, Sylvain Duranton, and Andrea Gallego

BCG GAMMA is BCG’s global team dedicated to applying artificial intelligence and advanced analytics to business at leading companies and organizations. The team includes 800-plus data scientists and engineers who apply AI and advanced analytics expertise (e.g., machine learning, deep learning, optimization, simulation, text and image analytics) to build solutions that transform business performance. BCG GAMMA’s approach builds value and competitive advantage at the intersection of data science, technology, people, business expertise, processes and ways of working. For more information, please visit our web page.

Authors: Steven Mills, Partner & Associate Director, Data Science, Washington, DC; Elias Baltassis, Partner & Director, Paris; Maximiliano Santinelli, Associate Director, Data Science, Boston; Cathy Carlisi, Managing Director, BrightHouse, Atlanta; Sylvain Duranton, Managing Director & Senior Partner, Global Leader, BCG GAMMA, Paris; Andrea Gallego, Partner & Chief Technology Officer, BCG GAMMA, Boston


Business 5.0

Business 5.0 is the future: the next step in the business revolution, the coming change in company structure, business processes, and workplace culture. Business 4.0 is the past.

Harnessing the potential – Automation

Most companies do not realize how much potential they have at their disposal, and even those that do often fail to make use of it. The future lies in Intelligent Automation (IA), which streamlines everyday operations. It draws on a range of technologies, from Robotic Process Automation (RPA) through machine learning and cognitive automation to artificial intelligence.

This approach lowers costs, eliminates the risk of errors, improves data quality, and increases the speed and quality of customer service. It also saves employees' time by taking over routine activities, freeing them to focus on tasks that require creativity. Automation also makes it possible to offer additional services without increasing headcount.
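
As a toy illustration of what "taking over routine activities" can look like in practice, here is a minimal RPA-style sketch. It is only an example: the file name, column names, accepted currencies, and validation rules are assumptions made for the illustration, not features of any particular automation product.

# Illustrative sketch (hypothetical file, columns, and rules): a small script
# that automates a routine back-office task - validating incoming invoice
# records - to cut manual errors and improve data quality.
import csv

def validate_invoice(row):
    """Return a list of problems found in a single invoice record."""
    problems = []
    if not row.get("invoice_id"):
        problems.append("missing invoice_id")
    try:
        if float(row.get("amount") or "") <= 0:
            problems.append("non-positive amount")
    except ValueError:
        problems.append("amount is not a number")
    if row.get("currency") not in {"PLN", "EUR", "USD"}:
        problems.append("unexpected currency: %r" % row.get("currency"))
    return problems

def check_invoices(path="invoices.csv"):
    """Read invoice records and report the ones that need human attention."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            problems = validate_invoice(row)
            if problems:
                print("Invoice %s: %s" % (row.get("invoice_id", "?"), ", ".join(problems)))

if __name__ == "__main__":
    check_invoices()

A rule-based check like this sits at the first rung of the IA ladder described above; machine learning and cognitive components come into play when the rules themselves have to be learned from data.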

· Harnessing the potential – Automation

· Productivity and efficiency – Artificial Intelligence

· The customer of the future – changing needs

· Making use of Big Data

· The future of the labour market – people, not robots

· The concept of the Internet of Things

· Cybersecurity

See, for example, EY: advisory services in intelligent automation.


The Cyber-Organisation and the New World of Work

The Cyber-Organisation and the New World of Work: Advocating a twin governance and collaborative intelligence solution to overcome a constant disruptive business context

By Mario Raich, Simon L. Dolan, Dave Ulrich and Claudio Cisullo

This paper explores the concept of the cyber-organisation and, in particular, the so-called cyber-enterprise and its functionality in a business context that is constantly generating disruption due to rapid technological advances and a shift in the definition of work. The cyber-enterprise is, and will be, operating in this fast-changing context driven by artificial intelligence. We argue that cyber-reality will change the fundamental roles of all stakeholders, be they employees, suppliers, customers, investors, partners, associations or governmental agencies, and will require corresponding changes in the governing bodies of organisations. Today, we are living in a world in transition and transformation. There are three powerful converging megatrends that may explain the shaping of the new world of work: globalisation, digitalisation and creation/destruction. Add to this the rise of cyber-reality, artificial intelligence (AI), global connectivity, as well as hybrid reality, hybrid work and business entity, and, finally, new, disruptive technologies like quantum computing, blockchain, neurotech and robotics, and you will understand that a new form of cyber-organisation is emerging. It is not a luxury; it is a vital necessity in order to survive and sustain business. We propose a new structure of twin boards to deal with this new business environment strategically and operationally.

Beyond contexts related to business, we are also facing global challenges threatening our sheer existence: demographics and global migration; environmental deterioration through global pollution; climate change; asymmetric conflicts and wars. Other contributing factors that are shaping, or will shape, the cyber-enterprise include the emerging new “Intelligent Internet” (including the Internet of Things), combined with machine learning, mobile technology and new technologies encompassing people, artefacts and cyber-entities (CE), which is on the way to becoming the first autonomous cyber-entity existing and acting in hybrid reality.

Beyond digital reality, a new, much-more-potent and disruptive revolution is surfacing: cyber-reality (CR). Cyber-reality is a powerful configuration of elements from digital reality, augmented reality and virtual reality. Together with artificial intelligence (AI), it will lead to a far more radical transformation than anything we have seen before. In fact, digitalisation is just one step, albeit a necessary one, in the transition towards virtual reality (VR). The progress of VR is tightly linked to the development of computer technology and artificial intelligence.

More: https://www.europeanbusinessreview.com


3 things chief legal officers can do now to become more cyber-savvy

Action #1 Understand the cyber threat environment

The National Council of Information Sharing and Analysis Centers (ISACs) helps organizations in various industries share information that can protect their facilities, personnel, and customers from cyber and physical security threats and other hazards. Members have access to information and tools to help them mitigate risks and enhance their cyber resilience.

Action #2 Look into the existing cybersecurity program

Most organizations today have some form of cybersecurity strategy. While knowing the technical details may be of some value, it can be more useful for legal executives to understand its scope and, at a high level, how effectively it addresses the cyber risks the organization faces. In particular, you should be familiar with four areas of the cybersecurity strategy and the program in which that strategy is executed.

Cyber risk profile

Understand the processes by which cyber risks have been identified and prioritized for your organization. How often is the profile updated? How does it account for a quickly evolving threat environment?

Program governance

Assess who across the enterprise is involved in cybersecurity program oversight. Who sets policies and procedures? What internal controls are there for compliance? What resources and programs are in place to predict, detect, and respond to cyber incidents, and how much does the organization spend on cybersecurity annually? Are the programs insourced or outsourced? How are employees and business partners educated and trained about cybersecurity, and how is the effectiveness of that training monitored over time?

Cybersecurity safeguards

Determine what resources, both human and digital, are in place to defend the organization. How is the cyber perimeter defined? What security measures protect each type of device and the networks to which they have access?

Cyber incident response and remediation

Identify existing disaster recovery plans for responding to data breaches and other cyber incidents and determine whether they meet applicable industry standards and regulations. If a breach occurs, what public disclosures and other actions are required? How quickly can the organization react to shut it down? Do existing plans go far enough not only to meet requirements but also to remediate the issue in a way that builds additional resilience, so it is not likely to happen again?

Action #3 Apply a legal point of view

With a clearer view of the cyber threat environment and the organization’s program for addressing it, legal executives can look upstream to determine where legal should be involved, both strategically and in discrete activities.

Strategically

Bring a legal perspective to the cyber risk assessment, prioritization, and mitigation process. Have an active voice in how the organization views cyber risk and how key elements of a cybersecurity program address those risks. As the organization expands its cyber footprint into new geographic areas, stay on top of legal and regulatory implications.

Tactically

As new business initiatives are undertaken (for example, new product development, digital expansion into new markets, third-party relationships, and many others), take a seat at the planning table to represent the legal point of view. For example, if the organization allows employees to use company-owned or their own mobile devices for business purposes, review the approach and help establish related parameters for access and usage.

Operationally

Insert legal into the process of monitoring cybersecurity programs. Make sure legal has adequate representation early on in the event of a cyber breach or other incident. Play a more active role in remediation efforts to help mitigate risk to the organization and prevent similar future events. To enable more effective strategic, tactical, and operational engagement, consider deeper training in cyber issues for your legal department or a subset of the department.

MORE: Deloitte Report, Tech Bytes Part 3 (Cyber): Three things chief legal officers can do now to become more cyber-savvy