Waiting for orchid

The right tool for the job

A compute workstation capable of running large language models efficiently for artificial intelligence workflows and development is on its way.
announcement
news
Author

Ernest Guevarra

Published

21 July 2025

Modified

24 July 2025

By the end of July 2025, we are expecting delivery of a compute server/workstation capable of running large language models (LLMs) efficiently. This machine will allow for local LLM deployment to support the development and use of artificial intelligence (AI) workflows in-house.

This is part of Oxford IHTM’s efforts to support its current students and alumni in learning and applying modern tools for global health projects. Specifically, the machine will support both the application of, and teaching on, computational sciences for existing global health challenges. It will serve the purposes described below.

We plan to name this machine “orchid” and use flower names for any future compute workstations that we will add to our planned high-performance computing cluster.

This compute machine is resourced to perform well for big-data machine learning applications. It also has the capacity to host large language models locally, ensuring that data protection and data privacy are observed when working with LLMs and AI. Finally, the server will be set up so that all alumni can access it remotely via secure shell (SSH).
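To give a sense of what local hosting means in practice, here is a minimal sketch of how someone might query a model running on the machine from their own laptop. It assumes an OpenAI-compatible chat endpoint (such as the one exposed by Ollama or vLLM) has been forwarded to the laptop over SSH; the hostname, user name, port, and model name are placeholders, not the final configuration of orchid.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint on orchid has been
# forwarded to this laptop over SSH, e.g.:
#
#   ssh -L 11434:localhost:11434 alumni_user@orchid.example.ac.uk
#
# Hostname, user, port, and model name are illustrative placeholders.
import json
import urllib.request

payload = {
    "model": "llama3",  # hypothetical locally hosted model
    "messages": [
        {"role": "user", "content": "Summarise WHO child growth standards in two sentences."}
    ],
    "stream": False,
}

request = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",  # forwarded local port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.load(response)

# Both the prompt and the model's reply stay within the local network,
# which is the point of hosting the model in-house.
print(reply["choices"][0]["message"]["content"])
```

Because the model runs entirely on orchid, no project data or prompts are sent to a third-party AI service.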

Several use cases have already been planned and allocated for once the compute server is available and operational.

We anticipate that the whole setup and access scheme for alumni use will be up and running by the middle of August 2025. Around that time, we will hold a community call to present and discuss the capabilities of the machine, how our alumni can use it for their projects, and the support and training we plan to provide to help alumni make use of this resource.