Custom Thread Support Library in C
User-level threading means implementing the illusion of concurrency without touching the kernel. No pthread_create, no system calls for context switching. Just ucontext.h, some stack management, and a scheduler that lives entirely in userspace.
I built a Thread Support Library (TSL) for an OS course project. The library manages thread lifecycle (creation, yielding, termination, joining, cancellation) and supports two scheduling algorithms: First-Come-First-Served (FCFS) and random selection.
How it works
Each thread gets a Thread Control Block (tcb_t) storing its ucontext_t, allocated stack (default 64KB), state, and return value. The library maintains two queues: a ready queue for runnable threads and an end queue for terminated threads waiting to be joined.
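Concretely, a TCB along these lines would carry everything the scheduler needs (field and enum names here are my guesses for illustration, not necessarily the library's exact layout):

```c
#include <assert.h>
#include <ucontext.h>

#define TSL_STACK_SIZE (64 * 1024)  /* default 64KB stack per thread */

/* Possible lifecycle states (names are assumptions, not the library's enum). */
typedef enum { TSL_READY, TSL_RUNNING, TSL_ENDED } tsl_state_t;

typedef struct tcb {
    int tid;             /* thread id, as returned by tsl_gettid() */
    ucontext_t context;  /* saved registers + stack pointer */
    char *stack;         /* heap-allocated stack, TSL_STACK_SIZE bytes */
    tsl_state_t state;
    void *retval;        /* return slot read by tsl_join */
    struct tcb *next;    /* queue linkage for the ready/end queues */
} tcb_t;
```

The `next` pointer is enough to implement both queues as singly linked lists, since a thread is only ever in one queue at a time.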
Context switching works via swapcontext: the currently running thread’s registers and stack pointer get saved into its ucontext_t, and execution jumps to the next thread’s saved context. The scheduler calls swapcontext on every tsl_yield, on thread exit, and on tsl_cancel. It’s a trampoline: yield sends you to the scheduler, the scheduler picks the next thread, swapcontext jumps there.
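The trampoline is easier to see with one thread and one scheduler context. This is a minimal, self-contained sketch of the mechanism (the names `scheduler_ctx`, `thread_body`, and the `step` counter are mine, not the library's), not the TSL internals themselves:

```c
#include <assert.h>
#include <ucontext.h>

static ucontext_t scheduler_ctx;  /* where every yield lands */
static ucontext_t thread_ctx;     /* the single "user thread" here */
static char thread_stack[64 * 1024];
static int step;                  /* records the order of control transfers */

static void thread_body(void) {
    step = 1;                                  /* first dispatch */
    swapcontext(&thread_ctx, &scheduler_ctx);  /* the heart of tsl_yield:
                                                  save me, resume scheduler */
    step = 3;                                  /* resumed after the yield */
}

/* Scheduler side: dispatch, regain control on yield, dispatch again. */
int trampoline_demo(void) {
    step = 0;
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof(thread_stack);
    thread_ctx.uc_link = &scheduler_ctx;  /* fall back here when body returns */
    makecontext(&thread_ctx, thread_body, 0);

    swapcontext(&scheduler_ctx, &thread_ctx);  /* scheduler -> thread */
    assert(step == 1);                         /* thread ran, then yielded */
    swapcontext(&scheduler_ctx, &thread_ctx);  /* resume thread past its yield */
    return step;                               /* 3: thread finished */
}
```

Every transfer is symmetric: `swapcontext(&from, &to)` saves the caller into `from` and resumes `to` exactly where it was last saved.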
```c
#include "tsl.h"
#include <stdio.h>

void thread_function(void *arg) {
    int id = tsl_gettid();
    printf("Thread %d started\n", id);
    // Perform some work
    printf("Thread %d finished\n", id);
}

int main() {
    tsl_init(ALG_FCFS);
    int tid1 = tsl_create_thread(thread_function, NULL);
    int tid2 = tsl_create_thread(thread_function, NULL);
    tsl_yield(TSL_ANY);
    tsl_join(tid1);
    tsl_join(tid2);
    return 0;
}
```
tsl_init takes a scheduling algorithm flag, initializes the queues, and sets up the main thread as the first TCB. From there, tsl_create_thread allocates a stack, uses makecontext to set the entry point, and adds the new thread to the ready queue. The tsl_yield(TSL_ANY) call in main voluntarily hands off the CPU to whatever thread the scheduler picks next.
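One wrinkle worth noting: makecontext entry points take int arguments, so you can't hand it a `void *` directly. A common workaround (which I'll sketch here; the helper names `thread_stub` and `create_thread` are illustrative, not the library's actual internals) is to stash the user's function and argument in the TCB and point makecontext at a no-argument stub:

```c
#include <assert.h>
#include <stdlib.h>
#include <ucontext.h>

#define TSL_STACK_SIZE (64 * 1024)

typedef struct tcb {
    int tid;
    ucontext_t context;
    char *stack;
    void (*entry)(void *);  /* user function + arg, stashed for the stub */
    void *arg;
} tcb_t;

static tcb_t *current;           /* the TCB the scheduler dispatched */
static ucontext_t scheduler_ctx;
static int next_tid = 1;
static int ran;                  /* demo flag set by the user function */

/* No-arg stub: fetch the real entry/arg pair from the current TCB. */
static void thread_stub(void) {
    current->entry(current->arg);
    /* the real library would move the TCB to the end queue here */
}

static tcb_t *create_thread(void (*entry)(void *), void *arg) {
    tcb_t *t = malloc(sizeof *t);
    t->tid = next_tid++;
    t->stack = malloc(TSL_STACK_SIZE);
    t->entry = entry;
    t->arg = arg;
    getcontext(&t->context);
    t->context.uc_stack.ss_sp = t->stack;
    t->context.uc_stack.ss_size = TSL_STACK_SIZE;
    t->context.uc_link = &scheduler_ctx;  /* return to scheduler on exit */
    makecontext(&t->context, thread_stub, 0);
    return t;
}

static void hello(void *arg) { ran = *(int *)arg; }

int create_demo(void) {
    int value = 42;
    tcb_t *t = create_thread(hello, &value);
    current = t;
    swapcontext(&scheduler_ctx, &t->context);  /* dispatch once */
    free(t->stack);
    free(t);
    return ran;
}
```

Setting `uc_link` to the scheduler's context means a thread that simply returns from its function lands back in the scheduler for free, without an explicit exit call.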
Scheduling
FCFS is a simple dequeue from the front of the ready queue. Random scheduling calls remove_thread_at_position(rand() % queue_length). That’s it. Both work because cooperative scheduling (yield-based) means the scheduler only runs at well-defined points, so there are no race conditions between scheduling decisions and queue mutations.
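Both policies reduce to "remove at position p" on the ready queue. A toy array-backed version shows the shape (the real library uses a linked queue of TCBs; these names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

#define QCAP 64
static int queue[QCAP];  /* toy ready queue holding tids */
static int qlen;

static void enqueue(int tid) { queue[qlen++] = tid; }

/* remove_thread_at_position analogue: shift everything after pos down. */
static int remove_at(int pos) {
    int tid = queue[pos];
    for (int i = pos; i < qlen - 1; i++)
        queue[i] = queue[i + 1];
    qlen--;
    return tid;
}

/* FCFS: always the front of the ready queue. */
static int pick_fcfs(void) { return remove_at(0); }

/* Random: any position, uniformly. */
static int pick_random(void) { return remove_at(rand() % qlen); }
```

Since the scheduler only runs when a thread explicitly yields or exits, neither function needs any locking.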
One thing that bit me early: forgetting to re-add the currently running thread to the ready queue before selecting the next one. The thread just vanished. It ran fine, completed its work, but tsl_join would block forever waiting for a thread that was no longer anywhere the scheduler could find. The fix was obvious in retrospect: yield means “I’m still runnable, pick someone else,” not “I’m done.”
Thread joining and cancellation
tsl_join(tid) blocks the calling thread until the target thread terminates. It polls the end queue for the target TID. If the thread hasn’t finished, the calling thread yields and tries again next time it’s scheduled. Not the most efficient approach (busy-waiting through cooperative yielding) but correct for this use case.
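The poll-yield-retry loop looks roughly like this. To keep the sketch self-contained I model the end queue as a tid array and inject "yield" as a callback; in the real library the loop body would call tsl_yield(TSL_ANY) and the scheduler would run other threads:

```c
#include <assert.h>

#define MAXT 16
static int end_queue[MAXT];  /* tids of terminated threads */
static int end_count;

static int find_ended(int tid) {
    for (int i = 0; i < end_count; i++)
        if (end_queue[i] == tid)
            return 1;
    return 0;
}

/* tsl_join's shape: poll the end queue; if the target isn't there,
   yield so it can make progress, then retry when rescheduled. */
static int join_polling(int tid, void (*yield)(void)) {
    while (!find_ended(tid))
        yield();  /* real code: tsl_yield(TSL_ANY) */
    return 0;     /* target terminated; success */
}

/* Demo yield: pretend the scheduler ran thread 7 to completion. */
static void demo_yield(void) { end_queue[end_count++] = 7; }
```

The correctness argument is the same as in the post: the waiter burns a scheduling slot per retry, but it can never miss the termination because terminated threads stay parked in the end queue until joined.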
tsl_cancel marks the target thread as cancelled, frees its stack, and removes it from the ready queue. If the cancelled thread is currently running, it yields immediately. A cancelled thread’s return slot gets set to a sentinel value so any waiting tsl_join unblocks cleanly.
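The ready-queue path of cancellation can be sketched with linked-list surgery plus the sentinel write. This is a minimal model under my own naming (`TSL_CANCELLED`, `ready_remove`, `cancel` are assumptions, not the library's identifiers), covering only the not-currently-running case:

```c
#include <assert.h>
#include <stdlib.h>

#define TSL_CANCELLED ((void *)-1)  /* sentinel; the actual value is an assumption */

typedef struct tcb {
    int tid;
    char *stack;
    void *retval;
    struct tcb *next;
} tcb_t;

static tcb_t *ready_head;  /* singly linked ready queue */
static tcb_t *end_head;    /* terminated threads awaiting join */

/* Unlink tid from the ready queue; NULL if not found. */
static tcb_t *ready_remove(int tid) {
    for (tcb_t **pp = &ready_head; *pp; pp = &(*pp)->next)
        if ((*pp)->tid == tid) {
            tcb_t *t = *pp;
            *pp = t->next;
            return t;
        }
    return NULL;
}

/* Cancel: pull from the ready queue, free the stack, park the TCB in
   the end queue with the sentinel so a waiting join unblocks. */
static int cancel(int tid) {
    tcb_t *t = ready_remove(tid);
    if (!t)
        return -1;  /* not in the ready queue: running, ended, or unknown */
    free(t->stack);
    t->stack = NULL;
    t->retval = TSL_CANCELLED;
    t->next = end_head;
    end_head = t;
    return 0;
}
```

Parking the TCB in the end queue rather than freeing it outright is what lets a later tsl_join observe the sentinel instead of blocking forever.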