| Approach | Pros | Cons |
|---|---|---|
| ❌ Monaco | Full VS Code experience, IntelliSense, multi-cursor | 2MB bundle, poor mobile support |
| ✅ CodeMirror 6 | ~150KB footprint, production-ready mobile support | No built-in IntelliSense, fewer VS Code shortcuts |
"I chose CodeMirror 6 over Monaco for the code editor, and this decision significantly impacts our frontend architecture. Monaco provides the full VS Code editing experienceโIntelliSense, go-to-definition, multi-cursor editingโbut at 2MB it would triple our bundle size and dominate our initial load time. For a coding practice platform, Monaco's IntelliSense is actually less useful than it sounds: users implement specific function signatures against known inputs, not exploring unfamiliar APIs. CodeMirror 6's 150KB footprint means our editor loads in under 500ms even on 3G connections. The mobile experience is where CodeMirror truly winsโits touch handling, virtual keyboard interaction, and viewport management are production-ready, while Monaco is effectively unusable on mobile. The trade-off is that power users won't get VS Code muscle memory shortcuts, but we can add common keybindings as CodeMirror extensions. For users who practice during commutes or breaks, mobile support is essentialโand Monaco doesn't offer it."
"I chose Zustand with the persist middleware over Redux or Context for state management. The key requirement driving this decision is code draft persistenceโusers must never lose their work if they accidentally close the browser or navigate away. Redux could achieve this with redux-persist, but that's 3 additional packages (redux, @reduxjs/toolkit, redux-persist) totaling 15KB+ and requiring action creators, reducers, and middleware configuration. Zustand's persist middleware is built-in and configures in 5 lines. Context API would require building persistence from scratch. The trade-off is Redux's richer devtools and middleware ecosystem, but for a coding practice app where state is straightforward (problems, code drafts, submissions), Zustand's simplicity wins. The real architectural benefit is that Zustand doesn't require Provider wrapping, so our component tree stays clean and we avoid the 'provider hell' of combining multiple contexts. For computed values like filtered problem lists, Zustand's selector pattern prevents unnecessary re-rendersโonly components subscribing to filters re-render when filters change."
Trade-off 3: Polling vs WebSocket for Status Updates
| Approach | Pros | Cons |
|---|---|---|
| ✅ HTTP Polling | Stateless, proxy-friendly, simpler error handling | ~1s latency, more requests |
| ❌ WebSocket | Real-time updates, fewer requests | Stateful, reconnection logic needed |
"I chose HTTP polling over WebSocket for submission status updates. For a code execution flow, the ~1 second polling interval is imperceptibleโusers expect 2-5 seconds for their code to run anyway. Polling simplifies our frontend architecture significantly: we use a simple useEffect with setInterval, handle errors with standard try/catch, and don't need reconnection logic for network interruptions. WebSocket would require connection state management, heartbeats, and graceful reconnection with exponential backoff. The real killer for WebSocket is corporate environmentsโmany companies' proxies block or interfere with WebSocket connections, but HTTP always works. The trade-off is slightly higher server load, but the backend caches status in Valkey making each poll sub-millisecond. If we later need streaming output (showing compilation errors as they happen), we can upgrade specific flows to WebSocket while keeping the simple polling for status. For 10K concurrent contest users polling every second, that's 10K requests/second to a cached endpointโeasily handled."
"I use TanStack Virtual for the problem list because LeetCode has 3000+ problems. Without virtualization, rendering 3000 table rows creates 3000 DOM nodesโcausing multi-second initial render, janky scrolling, and high memory usage. Virtualization renders only visible rows plus overscan buffer (~25 DOM nodes total). The trade-off is implementation complexity: we manage scroll position, calculate which items are visible, and position them with CSS transforms. But for a list that users scroll frequently while searching for problems, smooth 60fps scrolling is essential. The estimateSize of 56px allows fast initial render, and since all rows have identical height, we don't need dynamic measurement."
"I use react-resizable-panels for the split layout because users have different preferences for problem description vs code editor space. Some users want a narrow description panel to maximize coding area; others need full width for complex problem descriptions. The nested PanelGroup creates vertical split within the right panel (editor/results). Panel sizes persist to localStorage so users don't re-adjust every session. The trade-off is an additional dependency and DOM complexity, but this is a core UX pattern for IDE-style interfaces."
⚡ Deep Dive: Core Web Vitals Optimization
Target Metrics
| Metric | Target | LeetCode Challenge |
|---|---|---|
| LCP (Largest Contentful Paint) | < 2.5s | Problem description + code editor |
| INP (Interaction to Next Paint) | < 200ms | Submit button, test runs |
| CLS (Cumulative Layout Shift) | < 0.1 | Resizable panels, async content |
Trade-off 4: LCP Optimization Strategy
| Approach | Pros | Cons |
|---|---|---|
| ✅ Skeleton + streaming | Fast perceived load, progressive | Implementation complexity |
| ❌ Full SSR | Best LCP, SEO | Server complexity, hydration cost |
| ❌ Wait for all data | Simple | Slow LCP, poor perceived perf |
"For LCP optimization, I chose skeleton screens with streaming data over full SSR or waiting for complete data. The LCP element on our problem page is the problem description panelโa large text block that users need to read before coding. With full SSR, we'd need a Node.js server rendering React, adding deployment complexity and hydration overhead. Instead, we render a skeleton instantly (LCP < 500ms), then stream the problem description from cache. The skeleton maintains the exact layout dimensions, preventing CLS when content arrives. For the code editor (150KB), we lazy-load it with a Suspense boundary showing an editor-shaped skeleton. Users perceive instant load because they see the layout immediately, even though the editor hasn't loaded. The trade-off is that we need careful skeleton design matching final layoutโany mismatch causes CLS."
┌─────────────────────────────────────────────────────────────────┐
│ Resource Loading Priority                                       │
│                                                                 │
│ Preload (in <head>):                                            │
│   ├── Critical CSS (inline)                                     │
│   ├── Main JS bundle (< 50KB gzipped)                           │
│   └── Primary font (system-ui fallback)                         │
│                                                                 │
│ Prefetch (after LCP):                                           │
│   ├── CodeMirror chunk (150KB)                                  │
│   ├── Next problem (prediction based on current)                │
│   └── User's saved code from localStorage                       │
│                                                                 │
│ Lazy (on demand):                                               │
│   ├── Submission history                                        │
│   ├── Progress dashboard                                        │
│   └── Admin features                                            │
└─────────────────────────────────────────────────────────────────┘
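The "Prefetch (after LCP)" tier above amounts to scheduling non-critical fetches once the critical path is idle. A hedged sketch: in the browser this would use `requestIdleCallback` (falling back to `setTimeout` where it's unavailable, as in Safari or Node); the task list and return shape are illustrative.

```typescript
// Run prefetch tasks when the main thread is idle; swallow failures,
// because a failed prefetch just means the resource loads on demand later.
function prefetchWhenIdle(tasks: Array<() => Promise<unknown>>): Promise<unknown[]> {
  const idle: (cb: () => void) => void =
    typeof (globalThis as any).requestIdleCallback === 'function'
      ? (cb) => (globalThis as any).requestIdleCallback(cb)
      : (cb) => setTimeout(cb, 0);
  return new Promise<unknown[]>((resolve) => {
    idle(() => {
      resolve(Promise.all(tasks.map((t) => t().catch(() => undefined))));
    });
  });
}
```

Usage would look like `prefetchWhenIdle([() => import('./editor'), () => fetchNextProblem()])` fired after the LCP element paints.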
INP (Interaction to Next Paint) Optimization
| Interaction | Target | Optimization |
|---|---|---|
| Submit button click | < 50ms | Optimistic UI, defer network |
| Language dropdown | < 30ms | Preloaded options, no network |
| Panel resize | 0ms (60fps) | CSS transforms, no layout |
| Problem filter | < 100ms | In-memory filter, virtual list |
┌─────────────────────────────────────────────────────────────────┐
│ Submit Button Optimization                                      │
│                                                                 │
│ Click ──▶ Immediate UI feedback (button disabled, spinner)      │
│       ├─▶ State update (optimistic: "Submitting...")            │
│       ├─▶ Network request (fire and forget)                     │
│       └─▶ Transition to polling state                           │
│                                                                 │
│ Total time to visual feedback: < 16ms (one frame)               │
│ User perceives instant response                                 │
└─────────────────────────────────────────────────────────────────┘
"INP measures the delay between user interaction and visual feedback. For the submit button, we update UI state synchronously before the network requestโthe button shows a spinner within 16ms (one frame). The actual submission happens asynchronously. For panel resizing, we use CSS transforms instead of changing width/height properties, enabling GPU-accelerated 60fps animation without triggering layout. The filter input uses in-memory filtering over the already-loaded problem list, avoiding any network latency."
Mobile App: React Native version for on-the-go practice
Closing Summary
"I've designed a frontend architecture for an online judge optimized for Core Web Vitals. LCP targets < 2.5s through skeleton screens with streaming data and lazy-loaded CodeMirror (57KB initial load vs 200KB+ with Monaco). INP stays under 200ms via optimistic UI updatesโthe submit button shows feedback within 16ms, before network requests complete. CLS is prevented through reserved skeleton dimensions and persisted panel sizes. The architecture prioritizes perceived performance: users see a functional layout instantly, with the editor loading progressively. CodeMirror 6's 150KB bundle loads lazily while users read the problem description, making the editor ready by the time they need it. This performance-first approach means mobile users on 3G can start coding within 2 seconds."