Projects

Selected projects I host and maintain. Click through to explore the live applications.

Careeon — Job Platform

Django 5

Visit site

A clean hiring hub for job seekers and employers. Candidates browse and filter roles by keywords, location, salary, job type, education, and remote options, then apply with a tailored cover letter and uploaded CV. Employers create a company profile, post openings with live previews, and review applicants in an organized dashboard. Search trends surface in-demand roles on the homepage, helping both sides discover opportunities faster. The interface is fast, accessible, and works without a heavy front-end framework, so pages load quickly and forms feel responsive.

Tech stack

Framework
Django 5 with custom user model and email-based authentication.
Runtime
Gunicorn (WSGI), ASGI path ready for async work.
Database
SQLite on Render currently; previously PostgreSQL on Azure.
Features
Server-side filtering, sorting, pagination.
Performance
Trending search counts maintained with atomic F() updates; select_related to avoid N+1 queries.
Operations
Dockerized; collectstatic during build; deployed on Render.
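The trending counters rely on Django's F() expressions, which push the increment into the database as a single UPDATE instead of a read-modify-write in Python. A minimal sketch of the SQL this compiles to, using the stdlib sqlite3 module (the search_term table and column names are illustrative, not the app's actual schema):

```python
import sqlite3

# In-memory stand-in for the app's database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE search_term (term TEXT PRIMARY KEY, hits INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO search_term (term) VALUES ('remote developer')")

# Django's SearchTerm.objects.filter(term=...).update(hits=F('hits') + 1)
# compiles to a single statement like this: the database computes
# hits = hits + 1 itself, so concurrent requests cannot lose updates.
for _ in range(3):
    conn.execute(
        "UPDATE search_term SET hits = hits + 1 WHERE term = ?",
        ("remote developer",),
    )

hits = conn.execute(
    "SELECT hits FROM search_term WHERE term = ?", ("remote developer",)
).fetchone()[0]
print(hits)  # 3
```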
Screenshots: companies details, jobs, my applications, my jobs.

PureSportsData — Live Soccer Data Platform

Next.js + FastAPI

Visit site

A live, role-based platform for capturing and managing soccer match data in real time. Scouts record events pitch-side on a streamlined portal. Admins assign scouts to fixtures, approve requests, manage teams and competitions, and track coverage. Leaderboards and points reward contribution, while expenses and assignments keep operations organized. Built for soccer today and structured to add other sports with minimal changes. Fast UI, clear permissions, and reliable cloud hosting make it practical on match day.

Tech stack

Frontend
Next.js (App Router), TypeScript, Tailwind UI components.
Backend
FastAPI with Pydantic; Uvicorn; Argon2 password hashing.
Database
PostgreSQL via psycopg2 with schema and indexes.
API Design
REST endpoints with clear request/response models and CORS.
Performance
Batched queries and indexed hot paths; parallel fetches.
DevOps
Dockerized, environment-driven config; deployed on Azure.
Screenshots: Assign Scout, Home, Login, Soccer Game.

Automated Attack Surface Mapping (AASM)

FastAPI • Redis/Celery • Supabase • Docker

Final report (PDF)

Automated attack‑surface mapping for institutions and companies. Scans are triggered from a web UI, queued in Redis by a FastAPI service, executed by Celery workers, written to Postgres, and explored instantly in the UI. Built for scale, automation, and clear dashboards.
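The queueing flow, in which the API enqueues a scan job and workers execute it and persist results, can be sketched with stdlib threads and queue.Queue standing in for Redis and Celery (task shape and field names are illustrative):

```python
import queue
import threading

scan_queue = queue.Queue()   # stands in for the Redis broker
results = {}                 # stands in for the Postgres results tables

def worker():
    # A Celery worker loops like this: take a task, run discovery, store results.
    while True:
        job = scan_queue.get()
        if job is None:                 # shutdown sentinel
            break
        domain = job["domain"]
        # The real workers run subdomain/port/vulnerability discovery here.
        results[domain] = {"status": "done", "subdomains": [f"www.{domain}"]}
        scan_queue.task_done()

def enqueue_scan(domain):
    # The FastAPI endpoint does the equivalent of task.delay(domain).
    scan_queue.put({"domain": domain})

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

enqueue_scan("example.is")
enqueue_scan("example.com")
scan_queue.join()            # wait until all queued scans are processed
for _ in threads:
    scan_queue.put(None)
for t in threads:
    t.join()

print(sorted(results))  # ['example.com', 'example.is']
```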

Tech stack

UI
Standalone web interface to start scans and browse results.
API
FastAPI REST (companies, domains, IPs, endpoints, vulnerabilities, scans); direct reads for fast queries.
Workers
Redis + Celery pipeline handles discovery tasks asynchronously.
Data
Supabase Postgres; screenshots in Supabase Storage with DB paths.
Discovery
Subdomains, endpoints, metadata, screenshots, ports (Masscan/Nmap), vulnerabilities (Nuclei); Icelandic wordlists + LLM assists.
Ops
Docker‑compose microservices; cloud‑ready and horizontally scalable.
Screenshots: home dashboard, scan results.

Event‑Driven Microservices — Orders, Payments, Emails

Python • REST • RabbitMQ • SendGrid • Docker

Event‑driven system for orders, payments, inventory, and emails. Orders call peer services over REST for validation and emit RabbitMQ events that fan out to Payment and Email workers. Components are isolated, replayable, and horizontally scalable.
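The fan-out pattern, where a single "order created" event reaches both the Payment and Email workers, can be sketched in-process with a tiny publish/subscribe bus standing in for RabbitMQ (topic and handler names are illustrative):

```python
from collections import defaultdict

class Bus:
    """Minimal in-process stand-in for a RabbitMQ fanout exchange."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # A fanout exchange copies the message to every bound queue.
        for handler in self.subscribers[topic]:
            handler(event)

bus = Bus()
log = []

# Payment worker: charges the buyer, then reports the result.
bus.subscribe("order.created", lambda e: log.append(("payment", e["order_id"])))
# Email worker: sends the confirmation (SendGrid in the real system).
bus.subscribe("order.created", lambda e: log.append(("email", e["order_id"])))

# The Orders service validates via REST calls first, then emits the event.
bus.publish("order.created", {"order_id": 42, "total": 9900})
print(log)  # [('payment', 42), ('email', 42)]
```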

Tech stack

Services
Python REST per bounded context (Orders, Merchants, Buyers, Inventory, Payment, Email).
Messaging
RabbitMQ publish/subscribe for order lifecycle and payment results.
Email
SendGrid for transactional messages; workers scale horizontally.
Data
Per‑service stores (Postgres/Mongo/file) mounted with durable volumes.
Containers
Docker + Compose for local orchestration; easy multi‑instance workers.
Microservices event flow

Websites

Betri Fagmenn — Painting Company Website

Next.js + Tailwind • Hosted on Vercel

Visit site

Marketing website for the painting company Betri Fagmenn. Built with Next.js and Tailwind; deployed on Vercel for fast global delivery.

Process: drafted the structure and visuals, collected client feedback, iterated on content and design, then implemented and refined until requirements were met.

Tech stack

Framework
Next.js (App Router), TypeScript, Tailwind CSS.
Hosting
Vercel previews, image optimization, and CDN caching.
Betri Fagmenn website

Sushi Social — Restaurant Website

Next.js + Tailwind • Hosted on Vercel

Visit site

Marketing and information site for Sushi Social, a sushi restaurant in downtown Reykjavík. Built with the same stack as Betri Fagmenn for fast, maintainable content and clean presentation.

Process: initial design prototype, quick feedback loop with stakeholders, then iteration to fulfill content and brand requirements before deployment.

Tech stack

Framework
Next.js (App Router), TypeScript, Tailwind CSS.
Hosting
Vercel deployments with CDN and image optimization.
Sushi Social website

Algorithms in Go

Implementations of Raft consensus and a Chord-based overlay network with TCP and Protobuf.

Raft — Distributed Consensus in Go

Go

View on GitHub

Raft is a consensus algorithm that keeps a cluster of machines in agreement on a sequence of operations. It makes a replicated log behave like a single reliable log, even when some nodes crash or restart.

  • Leader election: Nodes start as followers. On timeouts they become candidates, vote, and elect a leader that coordinates the cluster.
  • Log replication: The leader appends client commands to its log and replicates them to followers via AppendEntries RPCs until a majority acknowledges.
  • Safety & commitment: Entries are committed only when stored on a majority and applied in order to the state machine, guaranteeing linearizable results.

This repository contains my Go implementation with timers, RPC stubs, persistence hooks, and tests to exercise elections and replication under failures.
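The commitment rule in the bullets above (an entry is committed once a majority stores it) reduces to taking the median of the acknowledged match indices. A small Python sketch of the idea behind the Go implementation; the function and variable names are mine, not the repo's:

```python
def highest_committed_index(match_index, leader_last_index):
    """Highest log index stored on a majority of the cluster.

    match_index: last index each follower has acknowledged; the leader
    always holds its own last index, so it is included too.
    Real Raft additionally requires the entry at this index to carry the
    leader's current term before it may be committed.
    """
    indices = sorted(match_index + [leader_last_index], reverse=True)
    # The value at position n // 2 (0-based) is held by at least
    # n // 2 + 1 nodes, which is a strict majority.
    return indices[len(indices) // 2]

# 5-node cluster: the leader is at index 7; followers acknowledge 7, 6, 3, 2.
# Index 6 is stored on three of five nodes, index 7 on only two.
print(highest_committed_index([7, 6, 3, 2], 7))  # 6
```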

Raft explanation diagram

Optimized Overlay Network — Chord-based

Go + TCP + Protobuf

View on GitHub

A simplified Chord overlay network with a central registry and many messenger nodes. Nodes form a structured P2P ring and route messages efficiently using finger tables.

Overview

  • Registry ↔ Messengers: messengers register/deregister via short-lived TCP connections; the registry keeps command connections open while awaiting responses.
  • Messenger ↔ Messenger: persistent TCP connections for peers in each node's finger table; auto-reconnect on failure for high-volume traffic.
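In an m-bit identifier space, node n's i-th finger points at the first node whose ID falls at or clockwise after (n + 2^i) mod 2^m; routing then greedily jumps through these fingers. A Python sketch of finger-table construction (the ring IDs are illustrative, not from the repo):

```python
def successor(ring, ident, m):
    """First node ID at or clockwise after ident on the ring."""
    space = 1 << m
    return min(ring, key=lambda node: (node - ident) % space)

def finger_table(ring, n, m):
    # finger[i] = successor((n + 2^i) mod 2^m), for i = 0 .. m-1
    return [successor(ring, (n + (1 << i)) % (1 << m), m) for i in range(m)]

# Ten nodes in a 6-bit (0..63) identifier space.
ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(finger_table(ring, 8, 6))  # [14, 14, 14, 21, 32, 42]
```

Because finger distances double, a lookup halves the remaining ring distance at each hop, giving O(log N) routing instead of walking the ring node by node.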

Protocol

Key messages: Registration, Deregistration, NodeRegistry (finger tables), InitiateTask, TaskFinished, RequestTrafficSummary, ReportTrafficSummary.

Run

  • Registry: go run registry.go <port>
  • Messenger: go run messenger.go <host:port>
  • Default setup: ./run_macos.sh 8099 10 25000 (opens N terminals, one per process)

Commands: setup (finger tables), start (send packets), route (show fingers), list (registered), print/exit (messenger).

Logs are written to /logs: one file per messenger plus a registry.log.

Overlay network diagram

3‑Process Kessels' Algorithm — Three Dogs and a Garden

Go — Mutual Exclusion

View on GitHub

This extends Kessels' two‑process algorithm to three independent processes (Alice, Bob, Charlie) so that at most one dog is in the garden at any time. The design uses only private flags (written by a single process, readable by all) and busy‑wait conditions to coordinate entry.

The state is split into two arbitration layers implemented with small two‑bit flag arrays: A and B coordinate Alice/Bob; AB (set by the winner of the A/B layer) arbitrates against Charlie, who uses C. In the A/B layer, each process sets its flag [0] to request entry and flag [1] as a tie‑breaker. Each waits until either the other has no request or the tie‑breaker says it is safe. The winner then raises AB and competes with Charlie, mirroring the same two‑flag pattern (AB vs C).

  • Mutual exclusion: only one layer winner proceeds to the second layer, and only one of {AB, C} can pass its await condition at a time.
  • Deadlock freedom: the tie‑breakers force progress: if both contend, one flips the tie bit and the other's await eventually releases.
  • Starvation freedom: alternating tie‑breakers ensure each contender eventually wins its layer, so every waiting process gets the garden.

Variables are owned dynamically: each process writes only its own flags (A by Alice, B by Bob, C by Charlie; the winner writes AB) while others read them. This preserves the "private writer" constraint from the assignment while enabling safe coordination for three processes.
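Each arbitration layer is an instance of Kessels' two‑process core. A Python threading sketch of that core (the real assignment is Go with three processes; here a shared counter demonstrates mutual exclusion, and the XOR of t0 and t1 encodes the tie‑breaker: P0 yields by making them equal, P1 by making them differ):

```python
import threading
import time

# Single-writer flags: P0 writes b[0] and t[0]; P1 writes b[1] and t[1].
b = [False, False]   # request flags
t = [False, False]   # tie-breaker bits
counter = [0]
N = 300

def p0():
    for _ in range(N):
        b[0] = True
        t[0] = t[1]                      # give way: tie-breaker favors P1
        while b[1] and t[0] == t[1]:     # await: P1 absent, or turn flips back
            time.sleep(0)                # yield so the other thread can run
        counter[0] += 1                  # critical section
        b[0] = False

def p1():
    for _ in range(N):
        b[1] = True
        t[1] = not t[0]                  # give way: tie-breaker favors P0
        while b[0] and t[0] != t[1]:     # await: P0 absent, or turn flips back
            time.sleep(0)
        counter[0] += 1                  # critical section
        b[1] = False

threads = [threading.Thread(target=p0), threading.Thread(target=p1)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(counter[0])  # 600: no increment was lost inside the critical section
```

In the three-process version, the winner of this A/B layer re-runs the same pattern against Charlie using the AB and C flags.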

3‑process Kessels algorithm diagram
Kessels algorithm timing/flags