a-fni/Algorithms-and-Data-Structures-Project


Algorithms and Data Structures — Final Project

Algorithms and Data Structures course final project (A.Y. 2021-2022) — Score: 30/30 with honours

Note: the original repository, with the full commit history, has been lost. This repository contains the final source code, the test cases, and the documentation provided with the project.

Overview

This project was developed as the final assignment for the Algorithms and Data Structures course of the second year of the BEng in Computer Engineering at Politecnico di Milano.

The program implements a word-guessing game engine (similar in spirit to Wordle), fully written in C as a single source file, designed to be evaluated on an online competitive platform with tight performance constraints.

Game Mechanics

The program reads a word length k from standard input, then manages a session consisting of multiple games. Each game works as follows:

  1. A reference word and a maximum number of attempts are provided.
  2. On each attempt, the player submits a word of length k from the dictionary.
  3. The engine compares the guess against the reference word and prints a positional feedback string:
    • + — correct character in the correct position
    • | — character present in the word but in the wrong position
    • / — character not present (or exceeded in count)
  4. The engine then narrows down the set of admissible words — words still consistent with all accumulated constraints — and prints their count.
  5. The game ends with ok (win) or ko (loss).
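The feedback rules above can be sketched with a standard two-pass comparison. This is an illustrative version, not the project's actual compare_and_learn; the function and variable names are assumptions:

```c
#include <stddef.h>

/* Two-pass feedback: '+' exact match, '|' present elsewhere,
   '/' absent or already exhausted by earlier matches. */
static void compute_feedback(const char *ref, const char *guess,
                             size_t k, char *out)
{
    int avail[256] = {0};

    /* Pass 1: mark exact matches; count unmatched reference chars. */
    for (size_t i = 0; i < k; i++) {
        if (guess[i] == ref[i]) {
            out[i] = '+';
        } else {
            out[i] = 0;                       /* decided in pass 2 */
            avail[(unsigned char)ref[i]]++;
        }
    }

    /* Pass 2: '|' only while unmatched occurrences remain, else '/'. */
    for (size_t i = 0; i < k; i++) {
        if (out[i] == '+')
            continue;
        unsigned char c = (unsigned char)guess[i];
        if (avail[c] > 0) {
            out[i] = '|';
            avail[c]--;
        } else {
            out[i] = '/';
        }
    }
    out[k] = '\0';
}
```

The counting in the first pass is what makes repeated characters come out right: a guessed character is only ever rewarded once per unmatched occurrence in the reference word.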

Between and during games, the following commands can be issued:

  • +inserisci_inizio / +inserisci_fine — begin/end a word insertion block
  • +stampa_filtrate — print the current admissible words in lexicographic order
  • +nuova_partita — start a new game
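A line starting with + is a command; anything else is a guess. The dispatch can be sketched as follows (the enum and function names are illustrative, not the project's identifiers):

```c
#include <string.h>

typedef enum {
    CMD_GUESS,           /* a plain word: treat as a guess */
    CMD_INS_BEGIN,       /* +inserisci_inizio */
    CMD_INS_END,         /* +inserisci_fine */
    CMD_PRINT_FILTERED,  /* +stampa_filtrate */
    CMD_NEW_GAME         /* +nuova_partita */
} command;

static command parse_command(const char *line)
{
    if (line[0] != '+')                         return CMD_GUESS;
    if (strcmp(line, "+inserisci_inizio") == 0) return CMD_INS_BEGIN;
    if (strcmp(line, "+inserisci_fine") == 0)   return CMD_INS_END;
    if (strcmp(line, "+stampa_filtrate") == 0)  return CMD_PRINT_FILTERED;
    return CMD_NEW_GAME;  /* +nuova_partita is the only remaining command */
}
```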

Architecture & Implementation

The entire implementation is contained in a single main.c file, as required by the assignment. Key design decisions were driven by the need for high throughput on large inputs.

Data Structures

  • word_node_t — linked hash table (64 buckets): the main dictionary, keyed by first character
  • ptr_node_t — linked hash table (64 buckets): the admissible-word set, storing pointers into the dictionary
  • char_node_t — ordered linked lists: per-character constraint lists (wrong positions for each character)
  • struct rules — constraint accumulator: correct positions, excluded positions, character counts

The dictionary and admissible word sets both use linked hash tables indexed by a custom hash of the first character, covering the full alphanumeric alphabet plus - and _ (64 buckets total). Dictionary nodes use a single malloc for both the node struct and its string, reducing allocator overhead and improving cache locality.
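The two ideas in the paragraph above (a 64-bucket first-character hash, and one allocation per node via a flexible array member) can be sketched like this; the identifiers are illustrative, not the project's actual ones:

```c
#include <stdlib.h>
#include <string.h>

typedef struct word_node {
    struct word_node *next;
    char word[];           /* string stored inline: one malloc per node */
} word_node;

/* Map the 64-symbol alphabet [0-9A-Za-z] plus '-' and '_'
   to a bucket index in 0..63. */
static int bucket_of(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'Z') return 10 + (c - 'A');
    if (c >= 'a' && c <= 'z') return 36 + (c - 'a');
    return c == '-' ? 62 : 63;                  /* '-' then '_' */
}

static word_node *insert_word(word_node *buckets[64], const char *w)
{
    size_t len = strlen(w);
    word_node *n = malloc(sizeof *n + len + 1); /* node + string together */
    if (!n)
        return NULL;
    memcpy(n->word, w, len + 1);
    int b = bucket_of(w[0]);
    n->next = buckets[b];
    buckets[b] = n;
    return n;
}
```

Placing the string inside the node halves the allocation count and keeps each node's key adjacent to its link, which is the cache-locality benefit mentioned above.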

Algorithms

  • Constraint filtering (is_admissible, filter): each candidate word is tested against the accumulated rule set — correct character positions, wrong-position exclusions, and minimum/exact character counts — to determine admissibility.
  • compare_and_learn: after each guess, produces the positional feedback string and updates the rule set incrementally, accumulating only the most restrictive constraints.
  • Rule merging (merge_rules, cnt_list_merge_and_clean): merges per-turn update rules into the global rule set using in-order merge on sorted linked lists, avoiding duplicate entries.
  • Lexicographic heap sort (heap_sort, build_max_heap, max_heapify): used to print admissible or dictionary words in sorted order via +stampa_filtrate.
  • I/O optimisation: all input/output uses getchar_unlocked / putchar_unlocked, the non-locking variants of getchar / putchar, to minimise per-character overhead under the platform's strict time limits.

Complexity

  • Dictionary lookup / insertion: O(N/64) average
  • Admissibility check: O(k · |Σ|) per word
  • Full dictionary filter: O(N · k · |Σ|)
  • Incremental filter (subsequent turns): O(|adm| · k · |Σ|)
  • Sorted print: O(m log m), where m = number of admissible words

where k = word length, N = dictionary size, |Σ| = alphabet size (64).
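The O(m log m) sorted print corresponds to a textbook max-heap sort over the collected word pointers; a minimal version of the build_max_heap / max_heapify scheme named above might look like this (a sketch, not the project's exact code):

```c
#include <string.h>

/* Sift a[i] down within the first n elements (max-heap on strcmp order). */
static void max_heapify(const char **a, int n, int i)
{
    for (;;) {
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < n && strcmp(a[l], a[m]) > 0) m = l;
        if (r < n && strcmp(a[r], a[m]) > 0) m = r;
        if (m == i)
            break;
        const char *t = a[i]; a[i] = a[m]; a[m] = t;
        i = m;
    }
}

static void heap_sort(const char **a, int n)
{
    /* build_max_heap: heapify every internal node bottom-up. */
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(a, n, i);
    /* Repeatedly move the max to the end and restore the heap. */
    for (int end = n - 1; end > 0; end--) {
        const char *t = a[0]; a[0] = a[end]; a[end] = t;
        max_heapify(a, end, 0);
    }
}
```

Heap sort gives the O(m log m) bound in-place and without recursion, a reasonable fit for a single-file program under memory constraints.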

Building

Dependencies: make and gcc.

# Build
make

# Clean build artefacts
make clean
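A minimal Makefile of the kind the targets above imply could look like this (the compiler flags are illustrative, not necessarily those of the original project):

```make
CC     = gcc
CFLAGS = -O2 -std=c11 -Wall

main: main.c
	$(CC) $(CFLAGS) -o main main.c

clean:
	rm -f main
```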

Course Context

This project was part of the second-year curriculum at Politecnico di Milano. The assignment required submitting a single C source file to an automated online judge that evaluated both correctness and execution performance on large test cases. The final score obtained was 30/30 con lode (with honours).