EVENT SUMMARY ¦ FourNet: Digital Transformation Summit 2024

Another great event from the FourNet team in Manchester. I always come away super informed, if not a little concerned, about some of the latest security threats. The landscape continues to evolve, and it was interesting to see how AI is both helping and getting embedded into some of the issues too.

Cybersecurity

Key Takeaways

  1. Ransomware remains a top threat: The professionalisation of ransomware, especially from Russian-speaking regions, continues to grow.
  2. Social engineering on the rise: Sophisticated phishing attempts, brute force attacks on Office 365, and insider threats are becoming more frequent.
  3. Managed security is key: Organisations should implement managed services, regular monitoring, and security awareness to protect against evolving threats.
  4. Quantum computing’s potential risks: There is concern about quantum computing’s ability to break current encryption standards in the near future.
  5. AI in cybersecurity: While AI can enhance security operations, it also poses risks, especially through data leakage and exploitation by malicious actors.
  6. Cybersecurity skills shortage: The lack of qualified professionals is a challenge; organisations are urged to focus on recruiting diverse talent and neurodiverse individuals.
  7. Supply chain vulnerabilities: Security risks from suppliers are growing, with the introduction of regulations like NIS 2 making it mandatory to assess and protect against these risks.
  8. Cybersecurity awareness: Creating a culture of security awareness at all organisational levels is critical in mitigating threats.
  9. Strategic partnerships: Collaboration with suppliers, government bodies, and other stakeholders is essential for maintaining a secure environment.
  10. Budget priorities for cybersecurity: Companies must justify and allocate resources effectively to cybersecurity, focusing on risk management and return on investment.
  11. Defence-in-depth approach: Multiple layers of security, including perimeter defences and human-centric processes, should be a priority.
  12. AI amplifying threats: The use of AI by cybercriminals is increasing, making attacks more efficient and targeted.
  13. Public-private partnerships: Essential for addressing national security challenges, particularly in cybersecurity.
  14. Increased role of private sector: Companies like Google and Microsoft now have a greater influence on security than many national intelligence agencies.
  15. Security as a business imperative: Companies must proactively design security into their systems and operations, not just as an afterthought.
  16. Technological acceleration: Advances in AI, quantum computing, and biosciences will reshape security and economic landscapes.

Innovation

  • Data-centric diversity strategies: Use of detailed diversity data to drive recruitment and retention initiatives, leading to higher performance.
  • Security by design: Encouraging organisations to embed security considerations from the outset rather than as a reactionary measure.
  • Inclusive leadership practices: Creating channels for employees at all levels to directly access leadership, fostering an open and communicative culture.
  • AI-powered security operations: Leveraging AI to enhance decision-making and streamline cybersecurity operations, enabling faster response to threats.
  • Quantum risk planning: Forward-looking strategies that anticipate quantum computing’s ability to break encryption, encouraging the use of hybrid (classical plus post-quantum) cryptography to secure operations.
  • Partnership-driven risk management: Encouraging partnerships with academic institutions and industry peers to address skills shortages and improve security outcomes.

Key Discussion Points

  1. The professionalisation of ransomware: Cybercrime has become an organised industry, with services available to help criminals negotiate ransom payments.
  2. Focus on security basics: Leaders should ensure they are doing the basics well, such as regular system updates, backups, and user education.
  3. AI for good and bad: AI is used for both enhancing cybersecurity operations and amplifying cyber-attacks, highlighting the need for cautious adoption.
  4. Cybersecurity as everyone’s responsibility: Organisations must embed a security-first mindset across all departments, not just IT.
  5. Leveraging NIST framework: The NIST cybersecurity framework provides a strong basis for identifying and mitigating risks, especially for board-level communication.
  6. Supply chain risk: As organisations rely on interconnected supply chains, ensuring that suppliers meet security standards is crucial.
  7. Human psychology in cybersecurity: Awareness training should focus on changing behaviour, making cybersecurity relatable for all employees.
  8. Diverse recruitment in SOC teams: Hiring for mindset and aptitude, rather than just qualifications, helps build stronger, more effective security teams.
  9. AI-driven personalisation of attacks: AI is increasingly used to customise phishing attacks, making them more convincing and harder to detect.
  10. Cybersecurity budget allocation: Organisations should prioritise cybersecurity spending relative to the risk, ensuring that investments align with actual threat levels.
  11. Quantum computing threat: Organisations need to prepare for the possibility of quantum computers breaking encryption in the future.
  12. Regulation driving accountability: The NIS 2 regulation requires boards to take direct responsibility for cybersecurity, with penalties for non-compliance.

AI Development

Key Takeaways

  1. Generative AI as a transformative force: AI is not about replacing human jobs but augmenting human capabilities, enhancing productivity and creativity.
  2. Importance of foundational models: Large language models like GPT are reshaping industries by enabling multi-purpose tasks beyond narrow AI applications.
  3. Future of work with AI: AI will reshape jobs, not eliminate them, creating new roles and requiring employees to learn how to work with AI tools effectively.
  4. AI democratises access to advanced capabilities: AI allows non-technical users to perform complex analyses and tasks, democratising data and insights across organisations.
  5. Synthetic reality risks: Deepfake technology and AI-generated content pose significant risks to businesses and politics, as it becomes increasingly difficult to differentiate real from fake content.
  6. AI-driven customer engagement: AI’s role in customer service will focus on enhancing interactions, but human empathy and creativity will remain critical.
  7. Ethical AI use: Businesses must implement strong AI governance, ensuring ethical use, auditing systems, and keeping a human in the loop for important decisions.
  8. AI in fraud and disinformation: The increase in deepfakes and AI-generated disinformation presents security and ethical challenges, especially for financial fraud and corporate espionage.
  9. Importance of clean data and infrastructure: Success in AI adoption hinges on having clean, accessible data, relevant skills, and secure infrastructure.
  10. Human skills critical: Human empathy, creativity, and customer interaction remain essential and cannot be replaced by AI.
  11. Piloting AI projects: Organisations are advised to start with small, manageable AI pilots to create momentum and success stories before scaling.
  12. Job satisfaction from AI: AI helps by taking over repetitive tasks, allowing employees to focus on more meaningful and fulfilling work.

Innovation

  • AI for internal process improvements: Organisations are increasingly using AI to streamline internal workflows, such as automating inquiries and improving access to internal resources.
  • Text to image and deepfakes: AI can create hyper-realistic images and videos, raising both opportunities for creative content and risks related to misinformation.
  • AI for customer service: AI tools like Twilio are being implemented to enhance customer service, reducing bureaucratic hurdles and improving user experience.

Key Statistics

  • Private investment in AI: Between 2019 and 2023, private investment in generative AI rose sharply, showing the growing interest and potential of this technology.
  • 10x growth in deepfakes: Over the past few years, the number of deepfakes has increased by 10 times, posing risks to businesses and politics.
  • Low automation potential for clerical roles: While some jobs have exposure to automation, the real potential for full automation is much lower, ranging from 2% to 25%.

Key Discussion Points

  1. AI strategy alignment: AI strategies should align with overall business strategies, with targeted use cases to maximise impact.
  2. AI and creativity: Despite AI’s growing influence, human creativity and empathy remain irreplaceable.
  3. Risks of synthetic reality: The challenge of distinguishing real from AI-generated content (deepfakes) is a significant risk for organisations.
  4. Future job roles: AI will change, not eliminate, jobs, with roles focusing more on managing AI and applying it to enhance productivity.
  5. Ethics and AI governance: Ensuring ethical use of AI through governance, regular audits, and human oversight is critical to avoid potential misuse.
  6. AI democratisation: AI tools are making advanced capabilities accessible to non-experts, helping to break down barriers to data and analytics.
  7. Economic impact of AI: AI has the potential to add significant value to the global economy, but disparities between the Global North and South need to be addressed.
  8. AI in customer service: AI can assist in automating customer inquiries but human interaction will continue to be vital, particularly in complex or sensitive matters.
  9. Fraud risk with AI: The growing use of AI for fraudulent activities, including financial fraud via deepfakes, presents a significant risk.
  10. Focus on skills: As AI reshapes the workplace, organisations need to invest in training and skills development to ensure their workforce can work effectively with AI.
  11. Small-scale AI pilots: Organisations should pilot AI projects to create early success stories and build confidence before scaling.
  12. Data management: Clean, accessible data is essential for AI success; without it, AI initiatives are unlikely to deliver their full potential.
