Highlights from the Fastcode Programming Challenge
Congratulations to all participants in our inaugural programming competition at PPoPP 2025
Congratulations, competitors!
The inaugural Fastcode Programming Challenge (FCPC) was a grand success. We received 18 code submissions from 46 registered participants, and first prizes were awarded to:
Track 1 BFS: Team RGBSM (IIT Tirupati)
Track 1 SSSP: Team RELAXed Traversers (Queen's University Belfast, Chalmers University of Technology, and University of Gothenburg)
Track 2 (AI/ML): Team ParaCoder (Shenzhen University, Shenzhen Institutes of Advanced Technology, Shenzhen Polytechnic University, Tencent, Hong Kong Baptist University, Peng Cheng Laboratory, and Guangdong Institute of Intelligence Science and Technology)
More results are here. Congratulations to the participating teams, and heartfelt thanks to the volunteers who helped organize the event. We will organize more programming competitions soon! Sign up here to stay informed.
In today’s post, I’ll summarize the two tracks of the FCPC. Then, in a future post, I will share highlights from the winning teams.
Track 1: humans diving into the details of a narrowly specified problem
Brian Wheatman, the Chair of Track 1, said that the “human track” was designed to (1) expose people to software performance engineering (SPE), and (2) pose a problem with specific, subtle differences from its traditional form, in order to allow people to contribute to the field.
Brian recounted the fun of exposing people to SPE:
To me the biggest thing was the enthusiasm, the participation, and the excitement. I think the participants learned a lot as they went from their first solution to their last, and my big takeaway is how much fun people had. So many people came up to me afterwards, saying, “Are the problems gonna stay up? Can we keep working on them? When's the next competition gonna be held?”
He elaborated on the essence of SPE:
Software performance engineering is rarely about giant breakthroughs. Yes, there are tricks that can be done that will give big performance wins. But, with most problems that people care about, those tricks will already be done. Yes, when you first do vectorization, you get a big boost. When you first do parallelization, you get a big boost. When you first implement certain algorithms, you get these big boosts. But then a large part of performance engineering is just the iteration of slowly improving your code. It's just doing your due diligence, and doing it well. And these techniques build on each other, and build on each other, and you can end up with something much, much better.
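To make that concrete, here is a minimal C++/OpenMP sketch of the kind of first-step win Brian describes: the same reduction written serially and then as a parallel, SIMD-hinted loop. It is purely illustrative, not code from the competition; the problem, array size, and timing harness are my own choices, and a modern compiler may already vectorize the serial version at -O3.

```cpp
// sum_of_squares.cpp -- compile with: g++ -O3 -fopenmp sum_of_squares.cpp
#include <cstdio>
#include <vector>
#include <omp.h>

// Serial baseline: one core, no explicit parallelism.
double sum_squares_serial(const std::vector<double>& a) {
    double total = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        total += a[i] * a[i];
    return total;
}

// The first "big boost": spread the loop across cores and ask for SIMD lanes.
double sum_squares_parallel(const std::vector<double>& a) {
    double total = 0.0;
    #pragma omp parallel for simd reduction(+:total)
    for (std::size_t i = 0; i < a.size(); ++i)
        total += a[i] * a[i];
    return total;
}

int main() {
    std::vector<double> a(1 << 26, 1.5);  // ~64M doubles

    double t0 = omp_get_wtime();
    double s1 = sum_squares_serial(a);
    double t1 = omp_get_wtime();
    double s2 = sum_squares_parallel(a);
    double t2 = omp_get_wtime();

    std::printf("serial   %.3fs (sum = %.1f)\n", t1 - t0, s1);
    std::printf("parallel %.3fs (sum = %.1f)\n", t2 - t1, s2);
    return 0;
}
```

The easy wins stop there; everything after that is the slow, iterative work Brian is describing.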
Finally, Brian noted the transferable skills that people learn by diving into the details of how to implement a problem like single-source shortest path (SSSP) on a multicore machine.
So it's hard. In some fields, you implement a new algorithm, and that's it. You go from “it doesn’t work” to “it works!” with one new algorithm. Software performance engineering is much more — I don't want to say grind, but I kind of want to say grind — of this iterative process of making it better and better as you go. The skills that people learned from iteratively improving their solution to our slightly tweaked SSSP are absolutely real skills that they will be able to apply to all the other problems that they solve.
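To give a feel for where that iterative process starts, here is a hedged sketch, in the spirit of Track 1 but not the actual contest problem or its tweaked rules: a shared-memory, Bellman-Ford-style SSSP in C++/OpenMP. Threads relax edges in parallel, and a compare-and-swap loop keeps the shared distance array consistent.

```cpp
// parallel_bellman_ford.cpp -- compile with: g++ -O3 -fopenmp parallel_bellman_ford.cpp
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <limits>
#include <vector>

struct Edge { int u, v; int64_t w; };

constexpr int64_t INF = std::numeric_limits<int64_t>::max() / 4;

// Atomically lower dist[v] to `cand` if `cand` is smaller (a CAS "min" loop).
bool relax(std::atomic<int64_t>& dv, int64_t cand) {
    int64_t cur = dv.load(std::memory_order_relaxed);
    while (cand < cur) {
        if (dv.compare_exchange_weak(cur, cand, std::memory_order_relaxed))
            return true;  // we lowered the distance
    }
    return false;  // someone else already holds a smaller value
}

// Parallel Bellman-Ford: sweep all edges up to n-1 times, stopping early
// once a full sweep makes no improvement.
std::vector<int64_t> sssp(int n, const std::vector<Edge>& edges, int src) {
    std::vector<std::atomic<int64_t>> dist(n);
    for (auto& d : dist) d.store(INF);
    dist[src].store(0);

    for (int round = 0; round < n - 1; ++round) {
        bool changed = false;
        #pragma omp parallel for reduction(||:changed)
        for (std::size_t i = 0; i < edges.size(); ++i) {
            const Edge& e = edges[i];
            int64_t du = dist[e.u].load(std::memory_order_relaxed);
            if (du < INF && relax(dist[e.v], du + e.w))
                changed = true;
        }
        if (!changed) break;
    }

    std::vector<int64_t> out(n);
    for (int v = 0; v < n; ++v) out[v] = dist[v].load();
    return out;
}

int main() {
    // Tiny example graph: 0 -> 1 -> 2, plus a heavier direct edge 0 -> 2.
    std::vector<Edge> edges = {{0, 1, 2}, {1, 2, 3}, {0, 2, 10}};
    auto d = sssp(3, edges, 0);
    for (std::size_t v = 0; v < d.size(); ++v)
        std::printf("dist[%zu] = %lld\n", v, (long long)d[v]);
    return 0;
}
```

The CAS-min in relax() is the crux of the sketch: it lets many threads update the same vertex without locks, at the cost of occasional retries. A first version like this is exactly the kind of code that then gets improved round after round, with frontiers, delta-stepping buckets, and better memory layouts.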
Track 2: training/prompting AI models to be broadly good at parallel programming
Xuhao Chen, Chair of Track 2, contrasted the “AI track” with its human counterpart. Whereas Track 1 was about people solving narrowly specified problems (SSSP and its unweighted version, BFS), Track 2 was about training or prompting large language models (LLMs) to be broadly good at parallel algorithm programming.
I asked Xuhao if we could also use LLMs to dive into a specific problem like SSSP, and he explained that Track 2 avoided the narrow approach in order to prevent model overfitting. In other words, we trust that people will transfer what they learn from SSSP when they go out into the wider world; but we require AI models to prove that they can handle the wider world by challenging them with the Parallel Code Evaluation (ParEval) Benchmark, which uses a curated suite of representative problems to evaluate the ability of LLMs to write parallel code.
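To give a flavor of what such a benchmark asks of a model, here is an illustrative example written in the style of ParEval; the task description, function name, and completion below are my own invention, not an actual benchmark problem. The pattern is the same, though: a short natural-language task paired with a function signature, where the generated parallel body must be both correct and fast.

```cpp
// Illustrative only: a prompt in the *style* of ParEval, not an actual
// benchmark problem.
//
// Prompt given to the model:
//   "Count how many values in x are strictly greater than `threshold`.
//    Use OpenMP to parallelize the computation."
//
// A completion the model might produce:
#include <cstddef>
#include <vector>

std::size_t countAboveThreshold(const std::vector<double>& x, double threshold) {
    std::size_t count = 0;
    #pragma omp parallel for reduction(+:count)
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (x[i] > threshold) ++count;
    }
    return count;
}
```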
Xuhao described the landscape of LLMs:
Currently there are two types of LLMs. First, there are proprietary models like OpenAI's ChatGPT, where you cannot access or change the model parameters. The model is already trained, and you can only prompt it and ask questions. Second, there are open-source models like DeepSeek and CodeLlama. Here, you can download the model and fine-tune it by changing the parameters.
The primary goal of Track 2 is for people to come up with some interesting prompting workflows or fine-tuning strategies that improve the coding performance of the existing models.
Finally, Xuhao noted how quickly the landscape is changing:
In the past few months, since we announced the competition, ChatGPT and DeepSeek have both improved a lot. Previously, they did pretty badly, so it was easy to improve them. But now it's getting much harder. So what we’re seeing in the competition is that [the teams] did improve a little bit over the state-of-the-art ChatGPT model, which is very powerful, but it’s a very marginal improvement.
Help design the next FCPC
What problem would make a good challenge for the next programming competition? Why? If you have an idea for the next FCPC, please let me know.