Author: Ricardo Moral

Published on: October 31, 2024

My Key Takeaways From TED AI San Francisco 2024

While visiting clients in San Francisco, I had the opportunity to attend TED AI. The event showcased a fascinating array of thought-provoking talks, many of which highlighted contrasting perspectives across key areas.

Below, I’ve summarised these contrasts, along with the primary arguments presented and my initial reflections.


AI as a Tool to Maximise Productivity vs. Human Replacement

A recurring theme in many presentations was AI’s potential to enhance human capabilities by serving as a powerful tool. Most speakers highlighted AI’s role in amplifying human performance, enabling individuals and teams to achieve more by leveraging AI as a supportive asset.

However, a notable exception came from Stephanie Zhan of Sequoia, who presented a contrasting vision. Sequoia is backing AI applications that could eventually lead to complete automation in certain domains, from robotics and autonomous agents to AI-driven software creation itself. This approach envisions AI not as a complement to human effort, but as a full replacement for some tasks.

Despite their advances, AI platforms are still far from fully replacing humans across a broad range of activities, mainly due to the limited reasoning abilities of today’s language models. Apple’s latest white paper highlights these gaps, showing that even the most advanced systems struggle with the nuanced, context-based understanding humans naturally possess. This underscores the ongoing need for human insight in complex applications. For more details, see: Apple White Paper.

My view is that the reality will likely be more nuanced. Full-scale AI autonomy, including agents with broad responsibilities, remains a very distant goal. Even if we reach such capabilities, new and unforeseen challenges will continue to emerge, requiring human insight and adaptability. For now, the most immediate and valuable opportunity lies in using AI as a tool to elevate human productivity. This approach allows us to generate tangible benefits today while deepening our understanding of AI’s long-term potential and trajectory.


LLMs: Open Source vs. Proprietary

Joelle Pineau, Meta’s Vice President of AI Research, presented Meta’s perspective on the importance of embracing open-source frameworks in developing and publishing AI models.

Meta’s decision to make Llama open-source is based on a belief that open ecosystems accelerate technological progress while fostering secure, affordable AI. By allowing developers to modify, control, and deploy AI models independently, Meta aims to tackle concerns over data privacy, operational costs, and reliance on proprietary ecosystems.

With a business model that doesn’t depend on selling AI access, Meta faces less financial risk by open-sourcing Llama than competitors invested in closed models. Meta envisions Llama becoming an industry standard, aligning with its goal of avoiding dependency on other tech giants that may impose limitations on AI development.

Open-source AI is also positioned by Meta as a driver of global economic opportunity and innovation. By making advanced AI accessible to startups, researchers, and organisations worldwide, Meta seeks to democratise AI, countering the concentrated power of closed models in big tech. This approach could lead to a more decentralised, collaborative AI landscape, with Llama evolving openly and continuously.

My view is that in this field, each company’s strategy is shaped by its commercial goals and business model, often articulated in a way that serves its interests. This tension between open and closed models will likely result in robust offerings on both sides, each with distinct advantages. As users, it will be essential to understand the diverse landscape of available large language models (LLMs) and assess the specific strengths and limitations of each.

At Parser, our focus is on developing expertise across the leading AI models to provide clients with tailored recommendations suited to their unique needs.


Training Models & IP Protection

Throughout the talks, we saw contrasting perspectives on intellectual property (IP) and content usage for training AI models.

One side argues that using copyrighted content without a licence to train AI models is a violation of IP rights. Ed Newton-Rex, founder of Fairly Trained—a group advocating for responsible data use in AI training—presented a compelling case supporting this position, emphasising the ethical and legal implications of unauthorised content use.

Fairly Trained is leading a campaign advocating for IP rights in AI training, urging support for a statement that reads: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” The campaign highlights the potential impact of unauthorised content usage on creators’ income and aims to ensure fair compensation and control over how creative works are used in AI model training. More details can be found at aitrainingstatement.org.

This stance is shared by various professionals and organisations, including News Corp and the Recording Industry Association of America, both of which have pursued legal action against AI companies for using copyrighted materials in model training.

In contrast, Angela Dunning, a litigation partner in the Silicon Valley office of Cleary Gottlieb, a leading international law firm, offered an alternative view, explaining how AI companies argue that they are within legal bounds by relying on the principle of Fair Use. On this view, training models doesn’t fundamentally differ from traditional data analysis: it allows AI systems to learn from publicly available data without infringing on IP rights.

Training AI models can fall under this umbrella if it involves transformative use, meaning the model extracts insights and patterns from the content rather than replicating the original material exactly.

I believe that practical common sense will guide how IP protection evolves in AI. Broadly, we protect tangible works like songs, books, source code, and photos—specific creations with identifiable form. However, ideas, concepts, or general knowledge aren’t typically protected, as they lack a concrete, unique form.

As the debate over IP protection in AI training evolves, it’s clear that the AI industry will need to find a balanced approach that respects creators’ rights while fostering innovation. The contrasting perspectives presented at TED AI highlight a central question: should the focus be on regulating training data or on managing model outputs? While some, like Fairly Trained, advocate for more stringent controls over training data to protect creators’ livelihoods, others suggest that Fair Use offers a viable path for AI companies to train models responsibly on public content.

The path forward will likely involve compromise and innovation on both sides. Future legal frameworks may combine elements of both views—encouraging companies to licence data responsibly while developing robust mechanisms to ensure outputs don’t infringe on existing copyrights. As models become more advanced, tracking and accountability mechanisms within AI itself could provide a solution, enabling creators, users, and companies to coexist in a landscape that respects both creative rights and technological progress.

Ultimately, achieving a practical balance will be essential as we navigate the intersection of IP rights and AI’s potential. For now, organisations, creators, and policymakers will need to stay engaged, shaping an approach that aligns with both ethical standards and the realities of AI’s rapid evolution.


AI Agents

Across the two days, it was evident that everyone recognises the transformative potential of AI agents to reshape how we live and work. These agents promise to unlock unprecedented productivity gains and enable the creation of new products and services.

However, achieving these ambitious goals isn’t without challenges. At Parser, we’ve been working on AI projects and building agents for some time, and we quickly realised the importance of developing solid capabilities in this space to help others succeed.


Here are some of our key insights:

  • Theoretical Foundations: A thorough grasp of transformers, embeddings, attention mechanisms, and unsupervised learning has been critical to realising the true potential of generative AI (a toy attention example follows this list).
  • Self-Hosting Complexity: While self-hosting open-source LLMs gave us flexibility, it also highlighted the high complexity and cost, especially when it came to infrastructure management and context maintenance.
  • Scaling Challenges: Transitioning from proof-of-concept to scalable solutions exposed challenges, particularly around context window limitations. We have been experimenting with chunking and retrieval techniques to manage these (see the sketch after this list).
  • UI and Usability Matters: Building a custom user interface was essential for seamless model interaction, as third-party platforms like Slack posed limitations in streaming, rendering, and usability.
  • Cost-Efficiency of Cloud Solutions: Moving to cloud-based LLM services such as Amazon Bedrock and OpenAI has reduced operational overhead and simplified compliance, proving highly cost-effective for ongoing projects.
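
To make the first point concrete, here is scaled dot-product attention, the operation at the heart of transformers, in miniature: each token’s query vector is scored against every key, and the resulting weights blend the value vectors. This is a toy NumPy sketch for intuition only, not code from any of our projects; the random matrices stand in for learned projections.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: the core transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # relevance of each key to each query
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted blend of the value vectors

# Three "tokens" with 4-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4): one output per token
```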
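
And to illustrate the third point, here is a minimal sketch of the chunk-and-retrieve pattern: split a long document into overlapping chunks, score each chunk against the question, and send only the top matches to the model so the prompt fits the context window. Everything here is illustrative rather than our production code; the bag-of-words cosine score is a deliberately simple stand-in for the embedding model and vector store a real system would use.

```python
import math
import re
from collections import Counter

def chunk_text(text: str, max_words: int = 120, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Keep only the chunks most relevant to the query, so the final
    prompt stays within the model's context window."""
    q = bag_of_words(query)
    return sorted(chunks, key=lambda c: cosine(q, bag_of_words(c)), reverse=True)[:top_k]

# Usage: build a prompt from only the most relevant chunks.
document = "..."  # a long internal document would go here
question = "What were the key findings?"
context = "\n---\n".join(retrieve(question, chunk_text(document)))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In practice we would swap the word-count chunking for token-based sizing and the cosine score for embeddings, but the flow (chunk, rank, truncate, prompt) is the core of the technique.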

These insights are helping us and our clients build robust, efficient AI solutions capable of driving meaningful impact.

I thoroughly enjoyed the talks at TED AI. The diverse views and perspectives shared by speakers highlighted just how rapidly the field of AI is evolving, bringing with it challenges and opportunities across various domains. It’s inspiring to see how different industries are grappling with AI’s potential and implications. 

At Parser, we are focused on guiding our clients through this dynamic landscape, helping them navigate the complexities and seize the benefits of these advancements in AI.
