Generative AI for Asset Managers Workshop

Sept 30 - Oct 1

2-day online workshop


8:00 am - 11:00 am ET

Hosted By

Dr. Chan & Dr. Hunter


Email us:

Generative AI for Asset Managers is a 2-day online workshop demonstrating how to construct a discretionary trading strategy using an LLM.

The difference between algorithmic and discretionary traders is that the latter take qualitative, unstructured data as input, form a high-level understanding and intuition, and make trading decisions. With the advent of Generative AI and Large Language Models (LLMs), we can now systematize and deploy discretionary trading strategies at scale – essentially creating “George Soros on a chip”.

During the workshop, we will demonstrate how asset managers and traders can use Google’s BARD to turn unstructured data such as the audio feed of the Federal Reserve’s Chair’s speech into high frequency trading signals and backtest such strategies, all at minimal cost. We will then break into small groups to explore and experiment with variations and improvements on the basic code, as well as brainstorm other use cases of LLM for asset management.

Workshop Speakers

This workshop will be hosted by Dr. Ernest Chan, Founder and CEO of, Dr. Roger Hunter, Chief Technology Officer at QTS Capital Management, and Dr. Hamlet Medina, Chief Data Scientist at Criteo. We are honoured to be joined by Dr. Lisa Huang, Head of AI Investment Management at Fidelity Investments, who will present as a keynote speaker.


Dr. Ernest Chan

Founder and CEO of


Dr. Roger Hunter

CTO of QTS Capital Management


Dr. Hamlet Medina

Chief Data Scientist at Criteo

Keynote speaker

Dr. Lisa Huang

Head of AI Investment Management, Fidelity Investments

Intended Audiences

Asset Managers

Venture Investors


Product Developers


Finance & AI Researchers

The intended audience for this event includes asset managers, venture investors, entrepreneurs, product developers, regulators, and finance and AI researchers. If you fall into another category but feel you would be a good fit for this workshop, send us an email at and we can help answer any questions you have.

Workshop Outline


Large Language Models (LLMs) & Generative Pre-trained Transformers (GPT)

  • Introduction to LLM: BARD, ChatGPT, and other large language models
  • Typical Applications of LLMs
  • How LLMs work
  • Using BARD/PaLM on the web through their API
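To make the API bullet concrete, here is a minimal sketch of assembling a request body for a PaLM-style text-generation endpoint. The URL, model name, and field names (`prompt.text`, `maxOutputTokens`) are illustrative assumptions only; consult the official API documentation for the actual schema.

```python
import json

# Hypothetical endpoint for illustration; check the official docs for the
# real URL, model name, and authentication requirements.
API_URL = ("https://generativelanguage.googleapis.com/"
           "v1beta/models/text-bison-001:generateText")

def build_generate_request(prompt: str, temperature: float = 0.2,
                           max_output_tokens: int = 256) -> str:
    """Assemble a JSON request body for a text-generation call."""
    payload = {
        "prompt": {"text": prompt},
        "temperature": temperature,        # lower = more deterministic output
        "maxOutputTokens": max_output_tokens,
    }
    return json.dumps(payload)

body = build_generate_request("Summarize today's FOMC statement in one sentence.")
```

The body would then be POSTed to the endpoint with your API key; only the payload construction is shown here.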


Building Applications

  • Overview of Prompt Engineering
  • Building applications such as text generation, summarization, etc.
  • Few-shot learning with BARD
  • Introduction to embeddings
  • Overview of the BARD embeddings API and its usage
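As a taste of the prompt-engineering and few-shot topics above, the sketch below builds a few-shot sentiment-classification prompt from labelled examples. The example statements and labels are invented for illustration; the workshop's actual prompts may differ.

```python
def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt from (text, label) pairs,
    ending with an unlabelled query for the model to complete."""
    lines = []
    for text, label in examples:
        lines.append(f"Statement: {text}\nSentiment: {label}\n")
    lines.append(f"Statement: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("Inflation is moderating faster than expected.", "positive"),
    ("Further rate hikes may be necessary.", "negative"),
]
prompt = few_shot_prompt(examples, "The labor market remains resilient.")
```

The prompt ends at "Sentiment:", so the model's completion is the predicted label.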


Risks Associated with LLMs

  • Understanding the main risks of LLMs, such as hallucinations, bias, consent, and security
  • Methods for reducing the risk of hallucinations, such as retrieval augmentation, prompt engineering, and self-reflection
  • Methods to detect and address hallucinations, including reinforcement learning from human feedback (RLHF) and model-based approaches
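Retrieval augmentation, one of the mitigation methods listed above, grounds the model in known documents: retrieve the passage most similar to the query and instruct the model to answer only from it. A toy sketch with hand-made 3-dimensional embeddings (real embeddings have hundreds of dimensions and come from an embeddings API):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, corpus):
    """Return the document whose embedding is closest to the query."""
    return max(corpus, key=lambda doc: cosine(query_vec, doc["embedding"]))

# Toy corpus; embeddings are invented for illustration.
corpus = [
    {"text": "FOMC statement, June", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Earnings call transcript", "embedding": [0.1, 0.8, 0.2]},
]
best = retrieve([0.85, 0.15, 0.05], corpus)
grounded_prompt = (f"Answer using only this context:\n{best['text']}\n\n"
                   f"Question: What did the Fed announce?")
```

Because the model is asked to answer from retrieved text rather than its parametric memory, hallucinations are less likely (though not eliminated).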


Using LLMs to trade on the Federal Reserve Chair’s speeches

  • Why we chose the BARD family among the many available LLMs
  • Evaluating BARD’s native performance
  • Improving performance with embeddings
  • Worked example: computing sentiment ratings on public companies using embeddings
  • Test data: Video archives of the press conferences of the Federal Reserve Chair
  • Backtesting a discretionary trading strategy using the sentiment output of an LLM
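The backtesting step above can be sketched as a toy rule: go long the next period when the LLM's sentiment score is strongly positive, short when strongly negative, flat otherwise. The threshold, scores, and returns below are invented for illustration and ignore transaction costs; the workshop's actual strategy may differ.

```python
def backtest(sentiments, next_returns, threshold=0.5):
    """Cumulative P&L of a simple sentiment-driven rule.

    sentiments[t] is the LLM's score (in [-1, 1]) computed at time t;
    next_returns[t] is the asset return over the FOLLOWING period, so the
    signal is always formed before the return it trades (no lookahead).
    """
    pnl = 0.0
    for s, r in zip(sentiments, next_returns):
        if s > threshold:        # strongly positive -> long
            pnl += r
        elif s < -threshold:     # strongly negative -> short
            pnl -= r
        # otherwise stay flat
    return pnl

# Toy data: three press conferences and the subsequent returns.
sentiments = [0.8, -0.9, 0.1]
next_returns = [0.01, -0.02, 0.005]
pnl = backtest(sentiments, next_returns)
```

Aligning each signal with the return of the following period is the key discipline; mixing them up introduces lookahead bias.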


Deploying LLMs in Production

  • Best Practices for Deploying LLMs in Production
  • Overview of alternative generative models such as ChatGPT, BART, Cohere, Alpaca, etc.
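One common production practice is retrying transient API failures with exponential backoff. The sketch below is a generic wrapper with a simulated flaky call, not any vendor's SDK; hosted LLM APIs do rate-limit and fail intermittently, and libraries such as tenacity offer more complete implementations.

```python
import time

def with_retries(call, max_attempts=3, base_delay=0.0):
    """Retry a flaky call, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated LLM call that fails twice before succeeding.
attempts = {"n": 0}
def flaky_llm_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient API error")
    return "positive"

result = with_retries(flaky_llm_call)
```

In production, set a nonzero `base_delay` and catch only the retryable error classes (timeouts, rate limits) rather than bare `Exception`.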