Generative artificial intelligence is fast creating new jobs and impacting existing ones. Sebastian Mayer and Kentaro Ellert from Protiviti, and Christian Schmitz from Robert Half, discuss five roles created and influenced by AI.

In 2023, generative artificial intelligence has opened new doors. Large language models, including OpenAI’s ChatGPT and Google’s Bard, can help to solve problems, answer questions, and make predictions. They can draft reports, interpret pictures, and analyse data. Their processing and learning power is helping to unleash new business models.

But a word of caution: generative AI is only as good as the data it holds. So the quality and ethics of that data really matter, especially when they affect real-world decisions.

In this article, we explore five roles created and influenced by AI. Through the technical skills demanded of model developers, prompt engineers and data scientists, generative AI will have a profound impact on IT professionals. But it will also influence compliance, audit and risk teams. In the future, both technical and ethical expertise will be pivotal.
Model developers are the technical experts who build AI solutions. For example, they might develop a tool to support invoice processing. We have also seen AI trained to assess insurance claims using pictures of damaged cars and satellite maps of hurricane damage. The list of potential use cases is long. Model developers will have programming skills and experience in developing statistical models.
Prompt engineers are the translators between AI and humans; they communicate what they want, or expect, the AI to do. If a business wants to develop a whitepaper using ChatGPT, for example, a one-sentence prompt won’t be enough; asking for a structure, five key points and a target length for each section will get better results. We believe prompt engineers will come from an IT background because they need to understand how these models work. There is no university degree in prompt engineering, so for the next five years we expect them to come from data science and computer science courses.
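To make the difference concrete, here is a minimal sketch of the two approaches, assuming the OpenAI Python client; the model name, prompt wording and whitepaper topic are illustrative assumptions, not part of the authors’ example.

```python
# A minimal sketch of prompt engineering, assuming the OpenAI Python client.
# The model name, topic and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A one-sentence prompt leaves structure, depth and length to chance.
vague_prompt = "Write a whitepaper about generative AI in finance."

# A structured prompt spells out the outline, the key points and the length.
structured_prompt = (
    "Write a whitepaper on generative AI in finance.\n"
    "Structure: executive summary, five key points, conclusion.\n"
    "Key points to cover: fraud detection, forecasting, reporting, "
    "compliance, customer service.\n"
    "Length: roughly 150 words per key point, formal tone."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.choices[0].message.content)
```

The structured prompt is the prompt engineer’s translation work: the model is told the outline, the content and the expected length rather than being left to guess.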
Data scientists are the bridge between raw data and the intelligence it creates. They ensure that new tools are built using good, clean data; they effectively construct the foundation on which model developers can work. In the past, data scientists would ensure the right data was in the right place; now they are considering data bias, too. In the future, good data will need to be ethical data, and the role of the data scientist will expand in this direction.
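In practice, that bridging work often starts with simple checks on the raw data. The sketch below is hypothetical: the file, column names and thresholds are assumptions chosen purely to illustrate the kind of hygiene a data scientist performs before model developers take over.

```python
# A hypothetical first-pass data quality check with pandas.
# The file name, column names and thresholds are illustrative assumptions.
import pandas as pd

df = pd.read_csv("invoices.csv")  # hypothetical training data for an invoice tool

# Basic hygiene: missing values, duplicates and implausible amounts.
missing_share = df.isna().mean()             # share of missing values per column
duplicate_rows = df.duplicated().sum()       # exact duplicate records
negative_amounts = (df["amount"] < 0).sum()  # invoice amounts that cannot be right

print("Columns with more than 5% missing values:")
print(missing_share[missing_share > 0.05])
print(f"Duplicate rows: {duplicate_rows}")
print(f"Negative amounts: {negative_amounts}")
```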
Ethics officers will ensure the use of AI, and its impact on employees and clients, is fair and balanced. For example, if a business wants to develop an AI tool for recruitment, but the historical data available leans towards white men between the ages of 30 and 40, there will be bias in the new tool. Bias will impact the chances of a fair and ethical outcome. Ethics officers will come from a compliance background, but they will naturally develop technical skills along the way.
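A first, very rough way to surface that kind of skew is to compare selection rates across groups in the historical data, as in the hypothetical sketch below; the file and column names are assumptions, and a real bias audit would go well beyond a single table.

```python
# A hypothetical check for skew in historical recruitment data.
# The file and column names ("gender", "age_band", "hired") are assumptions.
import pandas as pd

df = pd.read_csv("historical_hiring.csv")

# Hiring rate per demographic group: large gaps hint at bias the model would learn.
rates = (
    df.groupby(["gender", "age_band"])["hired"]
      .mean()
      .sort_values(ascending=False)
)
print(rates)

# A crude disparity ratio between the best- and worst-treated groups.
print(f"Disparity ratio: {rates.max() / rates.min():.2f}")
```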
Audit and risk will change because of AI. In the past, for example, a software auditor would follow a series of well-established steps. But, with AI, IT auditors will need to understand how algorithms and self-learning technology work. It will be the same for other areas of the profession. What happens if AI develops intellectual property? Who owns it? How will AI influence cyber security, and what risks will it bring? Audit and risk teams will have to understand good and bad data, how models are built, and the ethics of new tools.

These are five roles created and impacted by AI. There will be many more. But the balance of technical and ethical skills in this list shines a light on what matters now. As businesses start using AI to improve their operations and launch new products, they will also have to consider the impact of their decisions. AI is a powerful tool, but deploying it wisely and fairly will make it even more powerful in the long term.
Sebastian Mayer is Managing Director at Protiviti with over twelve years of experience in IT consulting, information security, IT audit and SAP & ServiceNow Advisory.

Kentaro Ellert is a Manager and AI & AI Regulation Expert at Protiviti. He specialises in compliance for artificial intelligence and supports companies in building holistic AI governance and AI compliance management systems.

Christian Schmitz is Head of Technology Germany at Robert Half. He has been advising companies across all industries, including global market leaders and DAX40 companies, on all aspects of IT and their digitalisation programmes, with a focus on consulting and recruitment.