Tag Archives: AI

The issue of personalisation and its impact on KM

Author: Hal Kirkwood, BIR Board member; Bodleian Business Librarian, Saïd Business School, University of Oxford; President, Special Libraries Association, 2019

The current state of affairs was on full display at the last (November 2018) KM World Conference in Washington DC. I had the opportunity to attend for several days to see first-hand what is happening in the knowledge management realm. Many themes were prevalent throughout the conference; each day consisted of three tracks. The Day One tracks focused on KM & Culture, Digital Workspaces, and KM Tools & Tech. The Day Two tracks focused on Knowledge-Sharing Processes, Content Management, and KM Culture & Collaboration. Key takeaways and themes were the importance of collaboration; identifying the right tools to fit the problem and your organization's culture; designing environments, both physical and virtual, for employees and clients; determining how to transfer knowledge; developing information ecosystems; and the implementation and impact of artificial intelligence and machine learning. The clear underlying theme is the continuing intersection of people and technology.

One aspect that is gaining traction in KM is personalization: utilizing individual user data to provide a more focused recommendation or timely suggestion. Technology, in conjunction with access to massive amounts of data, is driving momentum towards ever greater personalization. This is personalization, not customization: consumers grow weary of making choices when these systems can make relevant choices for them based on their prior experiences. Consumers are showing preferences towards companies that provide effective, relevant personalization. However, since knowledge management focuses on the internal management of a company's knowledge, personalization at the employee level has been slower to develop.
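As a concrete, simplified illustration of the mechanism described above, the sketch below builds a profile from a user's prior choices and scores unseen items against it. All item names and feature weights here are hypothetical, and a production recommender would use far richer behavioural data than this.

```python
from math import sqrt

# Hypothetical content items with made-up "feature" weights (e.g. topic mix).
ITEMS = {
    "market-report": [0.9, 0.1, 0.0],
    "tech-briefing": [0.2, 0.8, 0.3],
    "hr-handbook":   [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history):
    # Profile = average of the feature vectors of previously chosen items.
    vecs = [ITEMS[i] for i in history]
    profile = [sum(col) / len(vecs) for col in zip(*vecs)]
    # Score every unseen item against the profile; highest similarity wins.
    candidates = {k: cosine(profile, v) for k, v in ITEMS.items() if k not in history}
    return max(candidates, key=candidates.get)

print(recommend(["market-report"]))
```

The point of the sketch is the shift it embodies: the system makes the relevant choice for the user from their prior behaviour, rather than asking the user to customize anything themselves.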

Personalization has primarily been within the purview of marketing and consumer buying habits. The power of personalization relies on a combination of data that was once inaccessible: namely, geolocation crossed with purchasing habits. It has become especially powerful when the immediacy of time is included, delivering personalized information and recommendations to a potential customer at the optimal moment to affect their behavior. Artificial intelligence and machine learning will make significant inroads in the personalization strategies of companies' marketing plans, providing more focused experiences for customers.

The challenge for many companies is to scale this personalization to the masses. AI and machine learning will increase the capacity to track multiple data points for larger numbers of customers, which in turn will raise customers' expectations of ever-improving levels of service that meet their exact needs and requirements. Evidence shows that personalization, where implemented, is highly successful in increasing sales and customer satisfaction, but that most companies are not yet implementing it.

Every company is now looking for ways to gather customer data that can be used to make more informed, and more specific, decisions about individuals. Many companies are also capturing terabytes of data on customer behavior to sell to businesses for this very reason. There is the risk, however, that an attempt at personalization will go wrong because the AI has processed poor or inaccurate information. As personalization becomes more accurate, and more ubiquitous, it will seem all the more glaring when AI-driven personalization is incorrect. Consumers are likely to feel more uncomfortable about what data is 'out there' on them and its accuracy, or lack thereof. This is a complicated issue of human perception of technologically driven services. How much control we have over all of this data is also a major concern. In Europe, GDPR is beginning to make an impact by providing consumers with more control over what data is collected and how it is used. It remains to be seen exactly how this will affect the data collection and utilization process. Many consumers, when surveyed, approve of the use of their data if they will receive a tangible benefit. There are some conversations taking place about implementing some form of GDPR in the United States, but few concrete details have been provided.

Companies such as Netflix, Spotify, and Amazon are pursuing, and leading, the development of ever greater data collection to deliver ever more enhanced services for individuals. Areas like physical fitness, healthcare, and personal finance are increasingly driven by apps that collect personal data and then provide recommendations relevant to an individual's life. Consumers allow themselves to be tracked in this way because of the return they receive on the investment of their personal data.

The majority of personalization development has been in the B2C marketplace; there will likely be increased demand for it on the B2B side. The key element will be systems that collect client-level data that can be assessed by AI applications. Many companies are moving into this space to deliver solutions for collecting and analyzing data. Business intelligence systems will evolve as AI and machine learning are layered into them, enabling much greater personalization of services and deliverables for corporate clientele. Companies must make the choice to implement an AI-based system to drive their decisions, which is not an easy task when it often requires a significant operational and cultural shift in how they conduct business. Companies making this decision are likely to benefit but must be wary of the myriad pitfalls. What ramifications this will have on the competitiveness of companies and markets, as well as within the broader business information environment, remains to be seen.

Emerging technology and content buying

Author: Penny Leach, Associate Director, EBRD, and BIR Editorial Board Member

Please note: this post contains the personal views of the author, which are not connected with her employer.

Emerging technology and innovation are impacting content buying – and selling – in multiple ways. This was the conclusion of a lively session held at the SLA Conference this year. The situation is evolving rapidly, with varying levels of appetite and capacity to optimise the exciting opportunities. As is so often the case, collaboration between multiple parties is more likely to lead to success, making the most of harnessing data in ways that free human intelligence for more value-add activity, and creating appropriate commercial models. However, there are challenges and concerns – the fear of unknown costs, of loss of control over proprietary content, and of missing out (and being disenfranchised) due to a lack of knowledge, resource and appropriate infrastructure – raising both private and public sector concerns.

The SLA Conference this year was held in Baltimore in June. The Conference is a great way to meet other information professionals and members of the information community from across the globe and to build better connections in person. Every year the SLA Leadership & Management Division's Content Buying Section brings together an experienced panel representing different approaches in the community of content vendors and buyers, to provide reality-based insight. This year the panellists were Amy Davis, Senior External Content Advisor at EY; Tim Baker, Global Head of Innovation at Thomson Reuters (now Refinitiv); and Bill Noorlander, Director of BST America (Conference sponsor).

The panel focussed on four emerging technologies that are creating content and new ways of deriving value from content: the Internet of Things (IoT), Data Analytics, Artificial Intelligence (AI) and Robotic Process Automation (RPA). Early on, the largely buy-side audience was reminded that content is not normally for sale but rather is leased for specific purposes – hence the complex contractual terms that are needed to protect all parties (content creator, provider and user).

Several themes emerged from the discussion, and from audience questions during the interactive session. Generally, the new content and technologies are seen to enable several kinds of 'smarter': a better client experience when deploying more visual and user-friendly products, more machine-ready data that customers can use in their own apps, and more efficient companies using their own data effectively to reduce cost (automated processes) and add value (e.g. finding more content to enrich products).

There is increasing usage of sensor-based devices in personal, industrial and civic applications (IoT). This is creating new and extremely high-volume data streams to add to the fast-growing mass of structured and unstructured data that is already part of our digitised world. This data 'exhaust', as a by-product of core businesses, offers opportunities for monetisation – for example in the financial sector – but with caveats that (as ever) mean 'free' is not really the case. These alternative data sets are messy, fragmented, lack standardisation and history, and are hard both to use effectively (signals can be weak) and to price. For vendors, it is costly to develop and maintain new commercial offerings where client needs might be very specific. There are hurdles, too, around data privacy and ownership, and legal terms such as the definition of users. 'Bots', for example – one of the tools created by AI and an example of RPA that can free humans from repetitive tasks – may be prohibited by legacy contracts. And just how do you count 'eyeballs' and fingertips?

On the buy side, the panellists concurred that it is better if multiple stakeholders are at the table – information professionals familiar with content licensing and the concept of reference interviews to articulate data needs, IT, procurement, legal advisors and, of course, the business process owners – to determine the requirement, negotiate new or amended license rights, match price to available budgets, and finally, but not least, implement the new tools.

New players are emerging – new intermediary service companies offering data science and analytics skills, such as data 'wranglers' (e.g., Quandl), and new roles such as the Chief Data Officer (CDO). More tools are needed to commoditise processes, to reduce development costs and to deal with these challenges. Blockchain, for example, may help with the tracking of data elements. As ever, watch this space!

On hollowing out….

Author: Stephen Phillips, Executive Director, Morgan Stanley, and BIR Editorial Board Member

Please note: this post contains the personal views of the author, which are not connected with his employer.

Earlier this year Stephen Dale wrote a fascinating article on corporate memory for the May edition: “Are we destined to forget everything we already know?”. As I reflected on his narrative, I felt the need to explore this topic further, as organisations appear to have become “hollowed out” as they focus on cost to deliver short-term efficiency and opportunity.

I also felt the need to re-interpret some of the terminology used to define information, knowledge and memory.  The vocabulary for these concepts has become interchangeable in many organisations as they continue to search for increasingly challenging opportunities to realise further benefits from managing this space.

A quick search on Google (I know!) reveals the first definition of knowledge to be “facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject”. Nothing contentious there, but the second definition cites it as “information held on a computer system”. The latter was a new one to me; since when did knowledge become defined as information held on computer systems?

Another interpretation rang more true to me: “awareness or familiarity gained by experience of a fact or situation”.  To my mind, this speaks to the human nature of knowledge – it is much more than facts and information; it is about awareness, familiarity, experience, consciousness, perception and appreciation.  All nouns that reflect human nature and remain technological aspirations; at least for the time being.

Whilst it is important to recognise and appreciate the capabilities of the latest developments in AI, machine learning and neural processing, it is more important to recognise their limitations and appreciate the benefits associated with tenured people and their accumulated know-how in their respective roles.

The most impactful force in the resizing of the business information industry has been the empowerment of “knowledge workers” to do their own information seeking. However, investment in these workers and their information skills has lagged behind, leaving a workforce that knows which buttons to press but is poorly informed about what underpins the information and technologies it uses every day.

Redundancies, outsourcing and offshoring of business information specialists compound the issue. New entrants to the industry find it difficult to secure positions: their limited experience is at odds with the expectation that they operate at a high level without the benefit of strong foundations of basic, practical information-handling experience.

Meanwhile, the “new knowledge workers” increasingly rely on technology not just to supply the facts and information they need but also to manipulate them skilfully into a finished product.

Does it matter?

What happens when the technology fails?  Who has the knowhow or experience to check the product is accurate and is as expected? What happens if it fails the quality check?  Who figures out what went wrong?

Technology is a wonderful thing; I really do love many new technologies. Organisations are recognising the value of people, particularly those with tenure and the depth of understanding they bring to the business; but we cannot be complacent. When the technology fails, there is growing dissatisfaction with the lacklustre quality of services; when a problem arises, it requires depth of knowledge and experience to fill the gap.

A number of professional services organisations have begun re-aligning their KM work with Talent Development, recognising that knowledge and know-how are part of the intellectual capital of the organisation, and acknowledging that the experiential learning associated with employment is something to nurture and pass from person to person, not to programme into a machine and regurgitate ad infinitum. This is especially the case when standardised routines appear at odds with the need to differentiate an offer by building bespoke solutions to meet specific needs and expectations.

I remain optimistic that our industry will respond and reposition in light of continuing advances.  Unfortunately, this is only one part of the equation.  If we are to thrive, we must continually demonstrate our value to convince our leaders that we have a place in the future of our respective organisations.

2018 Annual Survey – Theme 2

In theme 2 we look at aspects of data and how the information professional can, and should, create an impact in this area. I was recently reading a piece on the Information Today blog about how academic librarians in particular can take on the research and management of data. It is an interesting piece by Andrew Cox that examines the links between data management and the skills needed in a professional librarian role today. He looks at how the importance of big data has grown from being just the level below information on the knowledge pyramid to the top consideration in enabling organisations to operate, grow and compete on the world stage. He considers how this has come about through the rise of big data, the concerns it has raised, and the abilities it has given us to gain greater knowledge and understanding of the world around us. Read the full article here: https://www.infotoday.eu/Articles/Editorial/Featured-Articles/Academic-librarianship-as-a-data-profession-125376.aspx


Data governance, literacy and quality all feature prominently as concerns in this year's survey. We have seen and discussed the quality of data and information throughout the year, with the rise in fake news being published not always deliberately but sometimes through a misunderstanding and misuse of the underlying data, which at the very best has led to misinterpretation of that data. Also in the news have been the reported detrimental effects of using machines to analyse large data sets without the relevant context for interpretation. So, whilst it has been feared in some circles that the rise of big data and machine search and analysis would adversely impact jobs and employment, it turns out that library and information professionals have never been more needed to check the analysis and add valuable context to the data to ensure a true interpretation.


Understanding data, how to search for it, and teaching others how to check the quality of the data they are gathering is now considered a key skill across all sectors. Managing that data internally and creating appropriate policies to ensure that data is not kept beyond its life span are equally important, particularly with new international regulations such as the General Data Protection Regulation (GDPR) coming into force. Compliance with data regulations has risen to the forefront as the general public in particular has become more and more aware of data, its use and its importance. We have discussed, both in the Journal and in blog posts, how data has been used and misused to manipulate or influence situations, including the impact on the American presidential campaign. Information professionals have the specialist knowledge and skills to support organisations in this area, ensuring the correct management of internal data, the research of external data and the interpretation of large data sets.


As specialists in this profession, library and information professionals are also of great value in ensuring the ethical use of data to gain information and intelligence. We have all read about the Cambridge Analytica and Facebook scandal; there have also been reports about other big players, including YouTube, allegedly collecting and improperly using children's data. Any news item about the potential misuse of data can have a lasting detrimental impact on both the organisations and the individuals involved. The importance of the ethical use of data is seen in the new framework guidelines on procuring data analytics that the UK Government has produced for civil servants. The Data Ethics Framework (https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework#the-data-ethics-workbook) highlights the Government's focus on ensuring that the data it collects and uses is handled appropriately and ethically. There is an interesting article and commentary on the Government's data plans by Rebecca Hill in The Register here: https://www.theregister.co.uk/2018/06/14/data_ethics_centre_framework_government_ai_announcements/


Look out for what our research has uncovered specifically on these aspects of data and data management for information departments across industry sectors in September’s Business Information Review.

First issue of 2018 now out online

Our March issue contains a number of papers with the general theme of the effects of technology on information and knowledge management. Hal Kirkwood returns to look at how artificial intelligence (AI) is affecting information professionals and their job roles. Delphine Phillips and Mark West from Integreon look at the future of Business Information Services (BIS) within the financial services sector and the effects of technology and other internal and external environmental factors in that area. We also have a contribution from Gabriela Labres Mallmann, a PhD student at the School of Management, UFRGS, considering the influence of Shadow IT on knowledge sharing. Here is a short overview of each of the papers in this issue.

  • The Current State of Artificial Intelligence and the Information Profession: Or Do Librarian Droids Dream of Electric Books? – Prof Hal P Kirkwood, Purdue University.

Hal begins by observing that interest in AI is increasing: in the last 12 months there has been a 100% increase in the use of the terms 'AI' and 'librarians' together. AI as a technology is fast moving from science fiction to reality, from the rising popularity of voice-activated tools such as Siri to the developing use of self-driving cars and even a self-operating grocery store! His article, unlike others, is not a review of the good and bad sides of using AI, but a consideration of how the technology is developed and its psychological impacts. A lot goes into the development of the technology; it is not created 'all knowing'. It requires a great deal of human interaction and consideration to develop the algorithms, providing 'good' and 'relevant' information and data to the AI tool in order for it to provide an effective service. It also requires ongoing human 'policing' to ensure that the information it provides is accurate and relevant. His article further reviews what is being done around the world to consider the impact of AI and to ensure that it is used for the greater good rather than creating a negative impact on people and society at large.

  • Exploring the Future of Business Information Services in the Financial Sector – Delphine Phillips, Knowledge Solutions Manager, Integreon, and Mark West, Operations Director, Knowledge and BIS, Integreon.

Delphine and Mark have conducted a highly interesting research study on the role of BIS within financial services and its future in light of changing internal and external environmental factors. Their research is gathered from global investment banks and equity houses and considers the role technology is playing in the development of the BIS of the future. They review different operating models, how these are affected by internal and external changes and look at future drivers and future scope developments. They also consider the influence of knowledge management services on BIS, how they link and interact.

  • The Influence of Shadow IT Usage on Knowledge Sharing: An Exploratory Study with IT Users – Gabriela Labres Mallmann, PhD student at the School of Management, UFRGS.

Gabriela presents a new look at knowledge sharing from the point of view of 'Shadow IT' (software and hardware not authorized by IT departments) and its effects. The research is gathered from a series of interviews with IT users, looking at how they share knowledge and information, why they share it in this way, and considerations for managing risk in the future.

  • Knowledge Management Process Arrangements and Their Impact on Innovation – Eduardo Kunzel Teixeira and Mirian Oliveira of PUCRS, Rio Grande do Sul, Brazil, and Carla Maria Marques Curado of ISEG-UL, Lisboa, Portugal.

Moving away from technology and focusing more on process, this paper discusses the impact knowledge management process (KMP) arrangements have on facilitating innovation. The authors look at how different processes, and different combinations of processes, can affect innovation. Their conclusions, overviewed in the abstract, provide a good taster of the paper itself:

1) it was identified that, in general, the companies apply balanced KMP arrangements;

2) that the same innovation results can be achieved using different KMP compositions; and

3) that KMP investments tend to reach a maximum effect, beyond which innovation decelerates.
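Finding 3 describes a classic diminishing-returns pattern. As a stylized illustration only (this is not the authors' model, and the coefficients are invented), a concave response curve makes the idea of a 'maximum effect' concrete:

```python
# Stylized diminishing-returns curve: innovation rises with KMP investment x,
# reaches a peak, and then decelerates. Coefficients are invented for illustration.
def innovation(x, a=4.0, b=0.5):
    return a * x - b * x * x  # concave curve; maximum at x = a / (2 * b)

levels = range(0, 9)
best = max(levels, key=innovation)
print(best)  # investment level giving the maximum effect (here, 4)
```

Beyond the peak, each additional unit of investment yields less innovation than it did before, which is precisely the deceleration the authors report.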

  • Out of the Box – Virtual Realities in the Business World

Luke Tredinnick reviews the emergence and current uses of virtual reality technology and considers how it can impact our world. Will it become just another passing fad like 3D television or is it set to be one of the next disruptive technologies on the horizon?

  • Perspectives

Martin White returns with a review of the latest papers across Sage which could be of interest to you. Highlighted is a paper on the importance of being allowed to make mistakes in order to develop knowledge and innovate. Martin draws from his own background to illustrate the importance of this in the work environment.

Other subjects covered include the use of language and the ability to analyse and use it to consider cultural fit within an organization; considerations for HR and prepping the workplace as the amount of knowledge-led work increases with the working environment becoming more and more complex; AI and human interaction and the development of shared mental models to facilitate future developments; a discussion on the impact of libraries' ISO standard; and the importance of user interfaces and the display of search results in a meaningful way to improve findability.

Luke Tredinnick and Claire Laybats

See more online here http://journals.sagepub.com/doi/full/10.1177/0266382118762967