[{"content":"A distributed platform for managing, executing, and proxying MCP (Model Context Protocol) servers. The system provides backend components for LLM tool discovery, authentication, and deployment across containerized environments.\nOverview MCP Server Hub manages multiple MCP servers through a microservices architecture. The platform provides centralized control for MCP server instances with support for distributed deployment.\nKey Features MCP Registry: Central registry for discovering and managing MCP server instances Execution Engine: Deploys and runs MCP servers on Nomad or Kubernetes, with Vault and Consul integration MCP Proxy: Gateway that routes requests to MCP servers and integrates with MCP clients (Cursor, LibreChat, Claude Desktop) Web Interface: Admin portal for managing profiles, servers, and secrets Protocol Versioning: Support for MCP protocol versioning across different server implementations Security: Integration with HashiCorp Vault for secret management Technical Architecture Microservices architecture with the following components:\nAPI Gateway: Authentication, rate limiting, and request routing MCP Registry: MCP server discovery and registration (separate from Consul service registry) Service Registry: Consul for service discovery and health checking Execution Layer: Nomad/Kubernetes orchestration for MCP server deployment Message Queue: Redis/Kafka for asynchronous processing and event streaming Data Layer: PostgreSQL for configurations and metadata Secret Management: HashiCorp Vault for credentials and secrets Go Redis Kafka PostgreSQL HashiCorp Vault Consul Docker Nomad Kubernetes MCP Protocol Use Cases Centralized management of MCP servers across development teams Multi-tenant deployments with isolated MCP server instances Managing MCP server configurations across different environments Tracking MCP server usage and tool invocations Technical Highlights Distributed architecture with horizontal scalability Event-driven design with 
Redis/Kafka messaging RESTful and gRPC APIs Load balancing and failover support Monitoring integration with Prometheus/Grafana Project Context Developed as a freelance/contract project (April 2024 - Present). The platform manages MCP server infrastructure, handling tool discovery, authentication, and orchestration.\n","permalink":"https://yinebebt.com/projects/mcp-server-hub/","summary":"\u003cp\u003eA distributed platform for managing, executing, and proxying MCP (Model Context Protocol) servers. The system provides backend components for LLM tool discovery, authentication, and deployment across containerized environments.\u003c/p\u003e","title":"MCP Server Hub"},{"content":"A dynamic link service built as an alternative to Firebase Dynamic Links, providing URL shortening, platform detection, social media link previews, and analytics for mobile and web applications.\nOverview A link management system that routes users to different destinations based on their platform (iOS, Android, Web). 
Provides link previews for social media sharing and analytics for tracking user behavior.\nKey Features URL Shortening: Generate short links with custom identifiers for tracking Platform Detection: Detects user platform (iOS/Android/Web) and redirects to appropriate destination Link Previews: Open Graph meta tags for social media previews Mobile Deep Linking: Deep linking into mobile apps with fallback to web or app store Analytics: Track clicks, geographic distribution, device types, and user engagement Multi-Language Support: Language routing for internationalized applications RESTful API: HTTP API for programmatic integration Technical Architecture Backend: Go-based microservices Caching: Redis for link resolution Metadata: Dynamic HTML generation with Open Graph tags Database: PostgreSQL for link storage API Example Generate Short Link POST /generate-link Content-Type: application/json { \u0026#34;type\u0026#34;: \u0026#34;product\u0026#34;, \u0026#34;link_id\u0026#34;: \u0026#34;product_123\u0026#34;, \u0026#34;user_id\u0026#34;: \u0026#34;user_456\u0026#34;, \u0026#34;language\u0026#34;: \u0026#34;ENG\u0026#34;, \u0026#34;data_id\u0026#34;: \u0026#34;optional_data\u0026#34; } Response:\n{ \u0026#34;short_url\u0026#34;: \u0026#34;https://link.example.com/M9PoxcPb0D8d_HD7z\u0026#34; } Link Redirection GET /{shortId} Returns a 302 redirect to the mobile app (if installed) or fallback web URL, with platform detection for iOS/Android/Web.\nSocial Media Preview Generates link previews when shared on social platforms (Facebook, Twitter, LinkedIn, WhatsApp) with custom images, titles, and descriptions.\nGo PostgreSQL Redis REST API Docker Nginx Use Cases Product links that open in mobile apps or web Tracking article engagement with short links Campaign performance monitoring Mobile app deep linking Technical Highlights Redis caching for fast link resolution Fallback mechanisms for reliability Monitoring with Prometheus/Grafana Project Context Built for a social commerce platform (June 
2024 - Present). The service replaced Firebase Dynamic Links and handles link interactions for mobile and web applications, providing analytics on user behavior.\n","permalink":"https://yinebebt.com/projects/dynamic-link-service/","summary":"\u003cp\u003eA dynamic link service built as an alternative to Firebase Dynamic Links, providing URL shortening, platform detection, social media link previews, and analytics for mobile and web applications.\u003c/p\u003e","title":"Dynamic Link Service"},{"content":"A billing management system for a ride-hailing platform, handling fare calculations, payment processing, and financial reconciliation.\nOverview Backend system for managing billing operations of a ride-hailing platform. Built with Go and CockroachDB, processing ride transactions, calculating fares, managing driver payouts, and handling financial reconciliation.\nKey Features Fare Calculation: Dynamic pricing based on distance, time, demand, and service type Payment Processing: Integration with payment gateways for charges and driver payouts Billing Reconciliation: Automated matching of rides, payments, and settlements Multi-currency Support: Transactions across different currencies Invoice Generation: Automated invoice creation Dispute Management: Workflow for payment disputes and refunds Financial Reporting: Reports for revenue tracking and audits Technical Architecture Distributed architecture with the following components:\nDatabase: CockroachDB for distributed, consistent data storage Microservices: Separate services for fare calculation, payments, invoicing, and reconciliation Event-Driven Processing: Asynchronous handling for payment webhooks and transaction updates API Gateway: Routing and authentication for billing operations Background Workers: Scheduled jobs for reconciliation and reporting Go CockroachDB REST API gRPC PostgreSQL Protocol Docker Redis Core Functionality Fare Calculation Engine Base fare + distance-based pricing Time-based charges for wait 
times and trip duration Surge pricing during peak hours Promotional discount application Service type multipliers (economy, premium, luxury) Payment Processing Multiple payment method support (cards, wallets, cash) Automatic retry logic for failed transactions PCI-compliant card data handling Split payments for shared rides Scheduled driver payouts Reconciliation System Daily financial reconciliation reports Automated matching of rides to payments Discrepancy detection and flagging Commission calculation for platform revenue Tax calculation and reporting Technical Highlights Distributed transactions with strong consistency Idempotent API design for payment processing Audit logging for financial compliance Automated testing Payment gateway failure handling Project Context Developed at 2F Capital (July 2022 - February 2023) for a ride-hailing platform\u0026rsquo;s billing system. Collaborated with senior engineers to design services, refactor code, and maintain financial accuracy.\nKey experience gained:\nFinancial system design and reconciliation Distributed databases (CockroachDB) Payment processing systems Data consistency in distributed environments ","permalink":"https://yinebebt.com/projects/ride-plus/","summary":"\u003cp\u003eA billing management system for a ride-hailing platform, handling fare calculations, payment processing, and financial reconciliation.\u003c/p\u003e","title":"Ride-Hailing Billing System"},{"content":"A Model Context Protocol server providing mathematical operations and calculator functionality for AI agents.\nFeatures Mathematical operations with proper precedence and scientific notation Random number generation with probability distributions Mathematical constants (π, e, φ, √2) Interactive prompts for math problems and explanations Technical Stack Go MCP Protocol JSON-RPC Links GitHub\n","permalink":"https://yinebebt.com/projects/mcp-calculator/","summary":"\u003cp\u003eA Model Context Protocol server providing mathematical operations 
and calculator functionality for AI agents.\u003c/p\u003e\n\u003ch2 id=\"features\"\u003eFeatures\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eMathematical operations\u003c/strong\u003e with proper precedence and scientific notation\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eRandom number generation\u003c/strong\u003e with probability distributions\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eMathematical constants\u003c/strong\u003e (π, e, φ, √2)\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eInteractive prompts\u003c/strong\u003e for math problems and explanations\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"technical-stack\"\u003eTechnical Stack\u003c/h2\u003e\n\u003cdiv class=\"tech-badges\"\u003e\n  \u003cspan class=\"tech-badge\"\u003eGo\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eMCP Protocol\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eJSON-RPC\u003c/span\u003e\n\u003c/div\u003e\n\u003ch2 id=\"links\"\u003eLinks\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/yinebebt/mcp-calculator-server\"\u003eGitHub\u003c/a\u003e\u003c/p\u003e","title":"MCP Calculator Server"},{"content":"Ethiopian Calendar (ባሕረ-ሐሳብ) desktop app, CLI, and HTTP API for date conversion and religious festival lookup.\nFeatures GUI desktop app built with Fyne — date converter and Bahire-Hasab festival lookup Convert between Ethiopian and Gregorian calendars Ethiopian religious festival and fasting dates (Bahire-Hasab) Bundled Ethiopic font for Amharic rendering CLI, HTTP API, and Go library Pre-built binaries for macOS, Linux, and Windows Technical Stack Go Fyne GUI CLI REST API Usage # Download (no Go required) # See https://github.com/yinebebt/ethiocal/releases # Or install with Go go install github.com/yinebebt/ethiocal@latest # Launch GUI ethiocal # CLI: get religious dates ethiocal bahir 2017 # CLI: convert dates ethiocal convert gtoe 2025 2 2 ethiocal convert etog 2017 5 25 # HTTP API ethiocal --server Links GitHub 
· Download · Documentation\n","permalink":"https://yinebebt.com/projects/ethiocal/","summary":"\u003cp\u003eEthiopian Calendar (ባሕረ-ሐሳብ) desktop app, CLI, and HTTP API for date conversion and religious festival lookup.\u003c/p\u003e\n\u003ch2 id=\"features\"\u003eFeatures\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eGUI desktop app\u003c/strong\u003e built with Fyne — date converter and Bahire-Hasab festival lookup\u003c/li\u003e\n\u003cli\u003eConvert between Ethiopian and Gregorian calendars\u003c/li\u003e\n\u003cli\u003eEthiopian religious festival and fasting dates (Bahire-Hasab)\u003c/li\u003e\n\u003cli\u003eBundled Ethiopic font for Amharic rendering\u003c/li\u003e\n\u003cli\u003eCLI, HTTP API, and Go library\u003c/li\u003e\n\u003cli\u003ePre-built binaries for macOS, Linux, and Windows\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"technical-stack\"\u003eTechnical Stack\u003c/h2\u003e\n\u003cdiv class=\"tech-badges\"\u003e\n  \u003cspan class=\"tech-badge\"\u003eGo\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eFyne\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eGUI\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eCLI\u003c/span\u003e\n  \u003cspan class=\"tech-badge\"\u003eREST API\u003c/span\u003e\n\u003c/div\u003e\n\u003ch2 id=\"usage\"\u003eUsage\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# Download (no Go required)\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# See https://github.com/yinebebt/ethiocal/releases\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan 
class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# Or install with Go\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ego install github.com/yinebebt/ethiocal@latest\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# Launch GUI\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eethiocal\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# CLI: get religious dates\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eethiocal bahir \u003cspan class=\"m\"\u003e2017\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"c1\"\u003e# CLI: convert dates\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eethiocal convert gtoe \u003cspan class=\"m\"\u003e2025\u003c/span\u003e \u003cspan class=\"m\"\u003e2\u003c/span\u003e \u003cspan class=\"m\"\u003e2\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eethiocal convert etog \u003cspan class=\"m\"\u003e2017\u003c/span\u003e \u003cspan class=\"m\"\u003e5\u003c/span\u003e \u003cspan class=\"m\"\u003e25\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan 
class=\"c1\"\u003e# HTTP API\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eethiocal --server\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch2 id=\"links\"\u003eLinks\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/yinebebt/ethiocal\"\u003eGitHub\u003c/a\u003e · \u003ca href=\"https://github.com/yinebebt/ethiocal/releases\"\u003eDownload\u003c/a\u003e · \u003ca href=\"https://pkg.go.dev/github.com/yinebebt/ethiocal\"\u003eDocumentation\u003c/a\u003e\u003c/p\u003e","title":"Ethiocal - Ethiopian Calendar"},{"content":"Practical implementation of Hexagonal Architecture (Ports and Adapters) in Go. A learning resource for building scalable, maintainable systems with clean architecture principles.\nRead the article\nFeatures Core business logic isolated from frameworks Multiple adapters: REST API (Gin), PostgreSQL, SQLite Unit tests for the core service layer Clear separation of concerns Technical Stack Go Clean Architecture Gin PostgreSQL Structure /internal /adapter /graphql /rest /repository (PostgreSQL, SQLite) /templates /core /entity /port /service Links GitHub • Article • Website\n","permalink":"https://yinebebt.com/projects/hexagonal-architecture/","summary":"\u003cp\u003ePractical implementation of Hexagonal Architecture (Ports and Adapters) in Go. 
A learning resource for building scalable, maintainable systems with clean architecture principles.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://yinebebt.com/post/hexagonal-architecture/\"\u003eRead the article\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"features\"\u003eFeatures\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eCore business logic isolated from frameworks\u003c/li\u003e\n\u003cli\u003eMultiple adapters: REST API (Gin), PostgreSQL, SQLite\u003c/li\u003e\n\u003cli\u003eUnit tests for the core service layer\u003c/li\u003e\n\u003cli\u003eClear separation of concerns\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"technical-stack\"\u003eTechnical Stack\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eGo\u003c/li\u003e\n\u003cli\u003eClean Architecture\u003c/li\u003e\n\u003cli\u003eGin\u003c/li\u003e\n\u003cli\u003ePostgreSQL\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"structure\"\u003eStructure\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-txt\" data-lang=\"txt\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e/internal\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  /adapter\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /graphql\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /rest\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /repository (PostgreSQL, SQLite)\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /templates\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  /core\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /entity\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan 
class=\"cl\"\u003e    /port\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    /service\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch2 id=\"links\"\u003eLinks\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/yinebebt/hexagonal-architecture\"\u003eGitHub\u003c/a\u003e • \u003ca href=\"https://yinebebt.com/post/hexagonal-architecture/\"\u003eArticle\u003c/a\u003e • \u003ca href=\"https://yinebebt.com/projects/hexagonal-architecture/\"\u003eWebsite\u003c/a\u003e\u003c/p\u003e","title":"Hexagonal Architecture Demo"},{"content":"When someone asks me how to start learning Go, the first challenge is not what to learn—but where to begin. There are many resources available, but without a structure, it becomes overwhelming.\nThis is a list of resources I’ve personally used while learning Go and during day-to-day work. Note that this list is based on personal preference and is not guaranteed to be complete.\nDifferent people learn differently—some prefer reading, others videos or hands-on practice. This list includes a mix of all three.\nThe key idea: start simple, build gradually, and connect concepts as you progress.\n1. Start Simple (Beginner) Interactive \u0026amp; Basics\nGo Playground — run Go code in the browser Go by Example — concise examples for core concepts Official Documentation\nGetting Started — official entry point Effective Go — idiomatic patterns Go Specification — language definition (reference-level) Standard Library \u0026amp; Blog\nStandard Library — explore built-in packages Go Blog — design insights and updates 2. Structured Learning Practice-Oriented\nLearn Go with Tests — hands-on, test-driven approach. This is one of the most practical ways to learn Go by writing tests and building incrementally. 
Style \u0026amp; Best Practices\nGo Style Guide Style Decisions Best Practices Understanding idiomatic Go becomes important once you grasp the basics.\nBooks (Deeper Understanding)\nThe Go Programming Language — Alan A. A. Donovan \u0026amp; Brian W. Kernighan. A solid reference once you\u0026rsquo;re comfortable with the basics. 3. Video Courses Go Programming (Matt Holiday) Go Tutorial Series (Tech School) Use videos if you prefer guided explanations alongside coding.\n4. Communities Gophers Slack Google Groups: golang-dev golang-nuts Communities are useful when you get stuck or want to stay updated.\nThanks for reading. I’ll keep updating this list as I learn more.\n","permalink":"https://yinebebt.com/post/go-resources/","summary":"\u003cp\u003eWhen someone asks me how to start learning Go, the first challenge is not \u003cem\u003ewhat\u003c/em\u003e to learn—but \u003cem\u003ewhere to begin\u003c/em\u003e. There are many resources available, but without a structure, it becomes overwhelming.\u003c/p\u003e\n\u003cp\u003eThis is a list of resources I’ve personally used while learning Go and during day-to-day work. Note that this list is based on personal preference and is not guaranteed to be complete.\u003c/p\u003e","title":"Go Learning Resources"},{"content":"I wrapped up my role at ChipChip on April 16, 2026.\nFor two years, I worked across backend development and operations.\nThis is not a victory lap. It is a reflection on what that period actually looked like: integrating core services, debating technical decisions with teammates, fixing operational gaps, and learning how to ship without breaking trust.\nScope of Work At ChipChip, my day-to-day work was broad. In one week, I could be debugging websocket authentication behavior, reviewing API contracts with teammates, adjusting deployment and monitoring setup, and then jumping into maintenance work. That mix changed my engineering mindset. 
I stopped seeing delivery as \u0026ldquo;feature merged\u0026rdquo; and started seeing it as \u0026ldquo;team can run and change this safely in production.\u0026rdquo;\nKey Systems and Lessons The most meaningful work was specific system work with real constraints.\nIn messaging, a lot of implementation sat around Tinode integration patterns: websocket flows for chat clients, REST pass-through for user and topic management, and authentication behavior that did not always map neatly to product expectations. For context, this messaging service powers user conversations and related chat flows in the product. One recurring complexity was auth strategy and boundaries. Some decisions looked simple from the outside, but became deep technical debates inside the team: what should be enforced on the chat server side, what should stay in the ChipChip service layer, and where validation should live so we did not expose too much or duplicate logic.\nMigration work was another concrete challenge. User migration paths from existing PostgreSQL-backed systems into messaging flows were sensitive and risky. The work was not just writing scripts. It required carefully sequencing data moves, understanding identity mapping, and minimizing disruption when assumptions in old data did not hold.\nOn the ops side, I worked through monitoring and reliability concerns around messaging services, including exporter-based metrics flow, Prometheus/Grafana visibility, and OpenObserve for observability. That taught me that \u0026ldquo;we have logs\u0026rdquo; is not observability. If metrics and logs are not actionable when things break, they are just noise.\nOutside messaging, I also contributed to data and analytics efforts around ClickHouse and Superset. That work improved feedback loops for product and engineering decisions. 
When teams can see behavior clearly, debates become more factual and less opinion-driven.\nAnd in parallel, dynamic link service work gave me a practical lesson in replacing dependencies with internal ownership. Building and maintaining a Firebase Dynamic Links alternative sounds like a feature, but in practice it is infrastructure work: routing reliability, edge-case handling, and analytics integrity under production traffic.\nHow I Worked With the Team One of my biggest personal improvements was collaboration quality under ambiguity. I became more deliberate in early conversations, especially when requirements were not fully clear. Instead of taking assumptions into code, we pushed for short alignment loops first: expected behavior, failure modes, rollout plan, and ownership after release. That habit reduced rework and made reviews more productive.\nI also learned that technical debate is not a problem by itself. Some of our best decisions came from hard discussions about tradeoffs: speed vs maintainability, strictness vs flexibility, short-term patching vs structural fixes. The key is whether the debate produces clearer decisions and shared ownership. When it does, the system gets better.\nThe challenging part was balancing everything at once. Context switching was a constant tax: feature and operations work often collided in the same week. If I did not protect deep work windows and make priority tradeoffs explicit, quality dropped quickly.\nWhat Changed These two years changed my defaults. I now optimize for maintainability as part of delivery, not as a \u0026ldquo;later\u0026rdquo; activity. I think about failure modes earlier, not after release. And I value team trust as an engineering multiplier: clear communication and predictable ownership are not soft skills around engineering, they are part of engineering.\nGrowth came less from perfect projects and more from owning imperfect systems responsibly. 
That is the sentence I would use to summarize this chapter.\nIt meant showing up for both feature delivery and operational reality, improving systems I did not originally design, handling technical disagreement without losing momentum, and learning to make tradeoffs explicit before they become production problems.\nI am grateful for the people I worked with during this period. My teammates and senior engineers helped shape how I think and build. Their feedback and trust pushed me to grow beyond just writing code. The wins were real, the misses were real, and both were necessary for growth.\nWhat I Am Building Now Since May 2025, alongside and after this chapter, I have been building backend systems: MCP-related infrastructure, agent workflows, and a Software Factory platform.\nMy current work includes MCP server infrastructure, agent execution and orchestration pipelines, and building the Software Factory for workflow-driven delivery and validation.\nI am open to roles and projects where I can contribute end-to-end to this kind of work: architecture, implementation, operations, and long-term maintainability.\nBye for now.\n","permalink":"https://yinebebt.com/post/two-years-at-chipchip/","summary":"\u003cp\u003eI wrapped up my role at ChipChip on April 16, 2026.\u003cbr\u003e\nFor two years, I worked across backend development and operations.\u003c/p\u003e\n\u003cp\u003eThis is not a victory lap. It is a reflection on what that period actually looked like: integrating core services, debating technical decisions with teammates, fixing operational gaps, and learning how to ship without breaking trust.\u003c/p\u003e","title":"Two Years at ChipChip: Building, Learning, and Growing"},{"content":"Most deep linking solutions are SaaS products with usage-based pricing. I wanted something simpler: a Go library I could drop into any project and run on my own infrastructure.\nBackground This started as an internal service at work. 
We needed short links with OG preview pages for sharing content on social platforms, plus app store redirects for mobile users who didn\u0026rsquo;t have the app installed. The usual flow: user clicks a link, sees a preview page with Open Graph meta tags (so Slack, Twitter, Telegram render a nice card), and gets redirected to the app or a fallback URL.\nIt was tightly coupled to our infrastructure: internal auth middleware, specific template paths, and assumptions about the deployment environment. I decided to extract and clean it up as a standalone library that anyone could use.\nWhat It Does deeplink is a Go library that provides:\nShort link generation: POST /shorten with a JSON payload, get back a short URL Click tracking: every visit increments a counter, queryable per link OG preview pages: customizable HTML templates with Open Graph meta tags for rich link previews Platform-aware redirects: detect iOS/Android via User-Agent and redirect to the appropriate app store Pluggable processors: define how different link types are handled during generation It works in two modes: as a library you mount on your own HTTP mux, or as a standalone server with Redis storage.\nTwo runtime dependencies: go-nanoid for ID generation and go-redis for the Redis backend. That\u0026rsquo;s it.\nUse Cases Concretely, this is useful if you need any combination of the following.\nShort links with app store redirects and preview pages. Generate short URLs, serve OG preview pages for rich cards on social platforms, and redirect mobile users to the appropriate app store. You control the data, there\u0026rsquo;s no usage limit, and it runs on your infrastructure.\nCampaign link tracking. Generate short links for campaigns and track clicks per link. List all links by type to see what\u0026rsquo;s performing.\nMobile app deep linking. Serve apple-app-site-association and assetlinks.json from the /.well-known/ route. 
Configure store URLs and the library handles platform detection and redirects.\nAny service that needs short URLs with metadata. The processor interface is generic. The built-in RedirectProcessor handles simple URL redirects, but you can implement your own for any link type.\nDesign Decisions A few choices worth explaining:\nLibrary-first, server second. The core package (deeplink) has no main function. You create a Service, register processors, and call Handler() to get an http.Handler. The standalone server in cmd/deeplink is just one way to use it. It reads config from environment variables and wires up Redis. If you already have an HTTP server, you mount the handler alongside your routes:\nservice, _ := deeplink.New(deeplink.Config{ BaseURL: \u0026#34;https://link.example.com\u0026#34;, Store: deeplink.NewMemoryStore(), }) service.Register(deeplink.RedirectProcessor{}) mux := http.NewServeMux() mux.Handle(\u0026#34;/\u0026#34;, service.Handler()) mux.HandleFunc(\u0026#34;GET /hello\u0026#34;, yourHandler) http.ListenAndServe(\u0026#34;:8090\u0026#34;, mux) Pluggable processors. Different link types need different handling. A redirect link just validates and stores a URL. A product share link might fetch metadata from an API. Instead of adding if/else branches for each type, the Processor interface lets you register handlers:\ntype Processor interface { Type() string Process(ctx context.Context, link *Link) error } The built-in RedirectProcessor covers the common case. For anything custom, implement the interface and register it. There\u0026rsquo;s a working example in the repo.\nTwo storage backends. Redis for production, in-memory for testing and development. 
The Store interface is small enough that adding PostgreSQL or SQLite would be straightforward:\ntype Store interface { Save(ctx context.Context, id string, payload *Link) error Get(ctx context.Context, id string) (*Link, error) IncrClick(ctx context.Context, id string) (int64, error) Clicks(ctx context.Context, id string) (int64, error) List(ctx context.Context, linkType, env string, cursor uint64, count int64) ([]LinkInfo, uint64, error) } Quick Start Start Redis and the server:\ndocker compose up -d go run ./cmd/deeplink Create a short link:\ncurl -X POST http://localhost:8090/shorten \\ -H \u0026#39;Content-Type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;type\u0026#34;:\u0026#34;redirect\u0026#34;,\u0026#34;url\u0026#34;:\u0026#34;https://example.com/docs\u0026#34;,\u0026#34;title\u0026#34;:\u0026#34;Docs\u0026#34;}\u0026#39; {\u0026#34;short_url\u0026#34;: \u0026#34;http://localhost:8090/aBcDeFgHiJkLmNoPq\u0026#34;} Open the short URL in a browser, you\u0026rsquo;ll see an OG preview page. Check the click count:\ncurl http://localhost:8090/links/redirect/aBcDeFgHiJkLmNoPq {\u0026#34;short_link\u0026#34;: \u0026#34;http://localhost:8090/aBcDeFgHiJkLmNoPq\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://example.com/docs\u0026#34;, \u0026#34;clicks\u0026#34;: 3} What\u0026rsquo;s Next This is v0.1.0. The core works but there\u0026rsquo;s room to grow:\nTTL support: links don\u0026rsquo;t expire yet Richer analytics: click counts are there, but referrer, geo, and device breakdowns are not More storage backends: PostgreSQL and SQLite are natural additions Issues and PRs are welcome.\nLinks GitHub: github.com/yinebebt/deeplink Go Reference: pkg.go.dev/github.com/yinebebt/deeplink Thanks for reading.\n","permalink":"https://yinebebt.com/post/deeplink-v0.1.0/","summary":"\u003cp\u003eMost deep linking solutions are SaaS products with usage-based pricing. 
I wanted something simpler: a Go library I could drop into any project and run on my own infrastructure.\u003c/p\u003e","title":"Building an Open-Source Deep Link Service in Go"},{"content":"In SaaS applications, authentication and authorization are critical. As your platform grows to serve multiple customers, each tenant wants to use their own identity provider (IDP), users need automatic provisioning from corporate directories, and access control must work across tenants.\nThis guide covers OAuth 2.0/OIDC fundamentals, multi-tenant authentication patterns, SCIM-based directory synchronization, and practical implementation details.\nUnderstanding OAuth 2.0 and OIDC What is OAuth 2.0? OAuth 2.0 is an authorization framework that enables applications to access resources on behalf of users without sharing passwords. Think of it as a valet key for your digital resources - you give limited access without exposing your master credentials.\nKey Concepts:\nResource Owner: The user who owns the data Client (Application): The app requesting access Authorization Server: Issues tokens after authentication (e.g., Google\u0026rsquo;s OAuth service) Resource Server: Hosts the protected resources (e.g., Google Photos API) The Authorization Code Flow:\n1. User clicks \u0026#34;Sign in with Google\u0026#34; 2. App redirects to authorization endpoint with client_id, redirect_uri, scope, state 3. User authenticates and grants consent 4. Authorization server redirects back with authorization code 5. App exchanges code for tokens (access token, refresh token, ID token) 6. App uses access token to call APIs What is OIDC (OpenID Connect)? OIDC extends OAuth 2.0 to standardize identity verification. While OAuth 2.0 answers \u0026ldquo;what permissions do you have?\u0026rdquo;, OIDC also answers \u0026ldquo;who are you?\u0026rdquo;\nOIDC adds:\nID Token: JWT containing user identity (email, name, etc.) 
UserInfo Endpoint: Standardized endpoint for user profile data Standardized Scopes: openid, profile, email, groups Bearer Tokens OAuth 2.0 uses bearer tokens - whoever carries the token can use it. Protect tokens by using HTTPS, storing securely (httpOnly cookies preferred), implementing expiration/rotation, and never logging them.\nMulti-Tenant Authentication Architecture The Challenge Building direct integrations with every IDP (Keycloak, Azure AD, Okta, Google Workspace) is time-consuming, maintenance-heavy, and complex due to protocol variations (SAML, OIDC, OAuth 2.0).\nEach tenant wants:\nTheir own identity provider Seamless user authentication Complete isolation between tenants IDP Federation Service An IDP Federation Service (like Ory Polis or Auth0) acts as a bridge between customer IDPs and your platform.\nWhen you need federation:\nEnterprise customers requiring their own IDP (Azure AD, Okta, custom SAML) Multiple tenants with different authentication providers Support for SAML, OIDC, and other protocols Automated user provisioning via SCIM When you might not need federation:\nSingle IDP for all users (e.g., only Google OAuth) Simple B2C application with email/password auth All tenants share the same authentication provider Architecture:\n┌─────────────────────────────────────────────────────────────┐ │ User Browser │ └────────────────────────┬────────────────────────────────────┘ │ 1. GET /login ▼ ┌─────────────────────────────────────────────────────────────┐ │ Web Application │ │ - Identifies tenant │ │ - Redirects to IDP Federation Service │ └────────────────────────┬────────────────────────────────────┘ │ 2. OAuth authorize request ▼ ┌─────────────────────────────────────────────────────────────┐ │ IDP Federation Service │ │ - Routes to customer\u0026#39;s IDP (tenant-based) │ │ - Issues federation tokens after authentication │ └────────────────────────┬────────────────────────────────────┘ │ 3. 
User authenticates ▼ ┌─────────────────────────────────────────────────────────────┐ │ Customer IDP (Keycloak, Azure AD, etc.) │ └─────────────────────────────────────────────────────────────┘ Key Benefits:\nSingle OAuth Interface: Backend integrates with one service, not multiple IDPs Multi-Tenant Support: Each customer uses their own IDP Protocol Abstraction: Federation service handles SAML, OIDC, OAuth 2.0 complexities Consistent Token Format: Uniform token structure regardless of upstream IDP Tenant Identification Tenants can be identified through various mechanisms depending on your application architecture:\nEmail domain-based: Extract tenant from user\u0026rsquo;s email domain Subdomain-based: Use subdomain as tenant identifier (e.g., acme.yourapp.com) Path-based: Include tenant in URL path (e.g., /acme/login) Header-based: Custom HTTP header containing tenant ID Database lookup: API endpoint to resolve tenant from email or other identifier Choose the approach that best fits your application\u0026rsquo;s routing and user experience requirements.\nAuthentication Flow Step 1: Tenant Identification\nApplication identifies the tenant through one of the methods above, then generates a CSRF state parameter for security.\nStep 2: OAuth Authorization\nGET /federation/api/oauth/authorize? 
tenant=\u0026lt;tenant-id\u0026gt;\u0026amp; product=yourapp\u0026amp; redirect_uri=https://yourapp.com/callback\u0026amp; state=\u0026lt;random-csrf-token\u0026gt;\u0026amp; response_type=code\u0026amp; scope=openid profile email groups Step 3: Token Exchange\n// App exchanges authorization code for tokens POST /federation/api/oauth/token Content-Type: application/x-www-form-urlencoded grant_type=authorization_code\u0026amp; code=\u0026lt;auth-code\u0026gt;\u0026amp; client_id=\u0026lt;id\u0026gt;\u0026amp; client_secret=\u0026lt;secret\u0026gt;\u0026amp; redirect_uri=\u0026lt;callback\u0026gt; // Response includes: access_token, id_token, refresh_token Step 4: Token Validation\n// Backend validates token via userinfo endpoint GET /federation/api/oauth/userinfo Authorization: Bearer \u0026lt;access_token\u0026gt; // Response: { \u0026#34;sub\u0026#34;: \u0026#34;user-123\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;alice@acme-corp.com\u0026#34;, \u0026#34;groups\u0026#34;: [\u0026#34;developers\u0026#34;, \u0026#34;admin\u0026#34;] } Authorization and Access Control Authorization can be implemented using various models depending on your application\u0026rsquo;s complexity and requirements:\nRole-Based Access Control (RBAC): Users have roles (admin, editor, viewer) with predefined permissions Attribute-Based Access Control (ABAC): Access decisions based on user attributes, resource attributes, and environmental conditions Group-Based Access Control: Users belong to groups that determine access (common in enterprise environments) Resource-Based Access Control: Direct user-to-resource permissions (suitable for simpler applications) This guide focuses on group-based access control since it integrates naturally with enterprise IDPs and directory services, making it practical for multi-tenant SaaS applications.\nGroup-Based Access Control How It Works:\nUsers belong to groups (from IDP token claims) Resources are restricted to specific groups Users access resources 
only if their groups match Example:\nUser \u0026#34;alice\u0026#34; → groups: [\u0026#34;developers\u0026#34;, \u0026#34;admin\u0026#34;] Resource \u0026#34;github-tools\u0026#34; → allowed groups: [\u0026#34;developers\u0026#34;] Result: alice can access User \u0026#34;bob\u0026#34; → groups: [\u0026#34;viewers\u0026#34;] Resource \u0026#34;github-tools\u0026#34; → allowed groups: [\u0026#34;developers\u0026#34;] Result: bob cannot access Implementation:\nfunc canAccess(userGroups, allowedGroups []string) bool { if len(allowedGroups) == 0 { return true // Public resource } for _, userGroup := range userGroups { for _, allowed := range allowedGroups { if userGroup == allowed { return true } } } return false } Tenant Isolation Critical: Users from Tenant A must never access Tenant B\u0026rsquo;s resources.\nApproach:\nExtract tenant identifier during authentication Filter all database queries by tenant_id Propagate tenant ID via application context // Middleware adds tenant to context func TenantMiddleware(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { claims := auth.ClaimsFromContext(r.Context()) tenantID := extractTenant(claims) ctx := auth.WithTenant(r.Context(), tenantID) next.ServeHTTP(w, r.WithContext(ctx)) }) } // All queries filter by tenant func GetServers(ctx context.Context) ([]Server, error) { tenantID := auth.TenantFromContext(ctx) return db.Query(\u0026#34;SELECT * FROM servers WHERE tenant_id = $1\u0026#34;, tenantID) } IDP Group Support Challenges Not all IDPs support group claims in OIDC tokens. Google Workspace and GitHub don\u0026rsquo;t provide group claims. Azure AD returns group object IDs (GUIDs), not names. 
Keycloak, Okta, and most enterprise IDPs support groups fully.\nA hybrid authorization model addresses these variations:\nDatabase-Backed Groups (default): For IDPs without group support (Google, GitHub)\nManage groups in your database Assign users to groups manually or via SCIM IDP Passthrough (enterprise): For IDPs with full group support (Keycloak, Azure AD, Okta)\nGroups come directly from IDP token claims Zero manual management func ResolveUserGroups(ctx context.Context, claims *Claims) ([]string, error) { tenant := getTenant(ctx) if tenant.GroupSource == \u0026#34;idp\u0026#34; { return claims.Groups, nil // Use IDP groups } return db.GetUserGroups(ctx, claims.Subject) // Use DB groups } SCIM and Directory Sync SCIM (System for Cross-domain Identity Management) is a standardized protocol for automating user provisioning and deprovisioning between systems. It enables automated provisioning (new employees get access automatically), automated deprovisioning (departing employees lose access immediately), group sync (organizational changes propagate automatically), and compliance (access control stays in sync with HR systems).\nSCIM 2.0 Resources User Resource:\n{ \u0026#34;schemas\u0026#34;: [\u0026#34;urn:ietf:params:scim:schemas:core:2.0:User\u0026#34;], \u0026#34;id\u0026#34;: \u0026#34;user-123\u0026#34;, \u0026#34;userName\u0026#34;: \u0026#34;alice@acme-corp.com\u0026#34;, \u0026#34;name\u0026#34;: {\u0026#34;givenName\u0026#34;: \u0026#34;Alice\u0026#34;, \u0026#34;familyName\u0026#34;: \u0026#34;Smith\u0026#34;}, \u0026#34;emails\u0026#34;: [{\u0026#34;value\u0026#34;: \u0026#34;alice@acme-corp.com\u0026#34;, \u0026#34;primary\u0026#34;: true}], \u0026#34;active\u0026#34;: true } Group Resource:\n{ \u0026#34;schemas\u0026#34;: [\u0026#34;urn:ietf:params:scim:schemas:core:2.0:Group\u0026#34;], \u0026#34;id\u0026#34;: \u0026#34;group-456\u0026#34;, \u0026#34;displayName\u0026#34;: \u0026#34;Developers\u0026#34;, \u0026#34;members\u0026#34;: 
[{\u0026#34;value\u0026#34;: \u0026#34;user-123\u0026#34;, \u0026#34;display\u0026#34;: \u0026#34;alice@acme-corp.com\u0026#34;}] } Directory Sync Architecture ┌─────────────┐ SCIM 2.0 ┌────────────────┐ │ IDP │ ────────────\u0026gt; │ SCIM Server │ │ (Azure AD, │ POST /Users │ (Federation │ │ Okta) │ POST /Groups │ Service) │ └─────────────┘ └────────┬───────┘ │ │ Polls every 5 min │ GET /api/v1/dsync/* ▼ ┌─────────────────┐ │ Sync Worker │ │ (Background) │ └────────┬────────┘ │ ▼ ┌─────────────────┐ │ Database │ │ users, groups, │ │ user_groups │ └─────────────────┘ Components:\nSCIM Server: Receives SCIM requests from IDPs, stores users/groups Sync Worker: Polls SCIM server periodically, syncs to your database Database: Stores synced users, groups, memberships IDP Support: As discussed in the Authorization section, not all IDPs support group claims equally. For SCIM directory sync: Azure AD, Okta, and Keycloak provide full support (users + groups). Google Workspace supports SCIM users reliably but has limited group provisioning (use Google Directory API for groups). 
GitHub doesn\u0026rsquo;t support SCIM (use OIDC tokens + custom sync).\nSCIM Endpoint Setup Create Directory Sync Configuration:\ncurl -X POST \u0026#34;https://federation.example.com/api/v1/dsync\u0026#34; \\ -H \u0026#34;Authorization: Api-Key \u0026lt;key\u0026gt;\u0026#34; \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -d \u0026#39;{ \u0026#34;tenant\u0026#34;: \u0026#34;acme\u0026#34;, \u0026#34;product\u0026#34;: \u0026#34;yourapp\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;generic-scim-v2\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Acme Corp Directory\u0026#34; }\u0026#39; Response includes SCIM endpoint and secret:\n{ \u0026#34;scim\u0026#34;: { \u0026#34;endpoint\u0026#34;: \u0026#34;https://federation.example.com/api/scim/v2.0/\u0026lt;id\u0026gt;\u0026#34;, \u0026#34;secret\u0026#34;: \u0026#34;scim-token-xyz\u0026#34; } } Configure in IDP:\nAzure AD: Enterprise Application → Provisioning → SCIM endpoint Okta: Application → Provisioning → SCIM 2.0 Sync Worker Implementation // Sync worker runs periodically (e.g., every 5 minutes) func (s *SyncService) Start(ctx context.Context) { ticker := time.NewTicker(5 * time.Minute) defer ticker.Stop() for { select { case \u0026lt;-ctx.Done(): return case \u0026lt;-ticker.C: for _, tenant := range s.getTenantsWithSync(ctx) { s.syncUsers(ctx, tenant) s.syncGroups(ctx, tenant) s.syncMemberships(ctx, tenant) } } } } Key Requirements: The sync worker must handle pagination for large directories, implement upsert logic to create or update users and groups, ensure multi-tenant isolation by syncing each tenant independently, and include robust error handling that logs errors and continues syncing other tenants.\nToken Validation Backend services validate tokens via the userinfo endpoint:\nfunc validateToken(ctx context.Context, token string) (*Claims, error) { req, _ := http.NewRequestWithContext(ctx, \u0026#34;GET\u0026#34;, issuerURL + \u0026#34;/api/oauth/userinfo\u0026#34;, nil) 
req.Header.Set(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + token) resp, _ := client.Do(req) defer resp.Body.Close() var userinfo UserInfo json.NewDecoder(resp.Body).Decode(\u0026amp;userinfo) return \u0026amp;Claims{ Subject: userinfo.Sub, Email: userinfo.Email, Groups: userinfo.Groups, }, nil } Token validation doesn\u0026rsquo;t require client credentials. The federation service validates the token and returns user information, following the standard OAuth 2.0 pattern for resource servers.\nAudience Validation for Multi-Tenant In multi-tenant setups, each tenant has a different client_id, making static audience validation impractical. Instead, configure the authentication to skip strict audience validation:\nauth: provider: \u0026#34;federation-service\u0026#34; issuer: \u0026#34;https://federation.example.com\u0026#34; strict_audience_validation: false # Multi-tenant mode jwks_path: \u0026#34;/oauth/jwks\u0026#34; What gets validated:\nToken signature (via JWKS) Issuer (must match trusted federation service) Expiration (token must be valid) Audience (skipped for multi-tenant) Security: Signature + Issuer validation is sufficient when using a trusted IDP federation service. 
The federation service manages client registration, preventing unauthorized access.\nCSRF Protection The state parameter prevents cross-site request forgery attacks by ensuring the OAuth callback came from your application:\n// Generate and store state parameter state := uuid.New().String() session.Set(\u0026#34;oauth_state\u0026#34;, state) // Include in authorization URL authURL := fmt.Sprintf(\u0026#34;%s?state=%s\u0026amp;...\u0026#34;, federationURL, state) // Validate in callback savedState := session.Get(\u0026#34;oauth_state\u0026#34;) if savedState != receivedState { return errors.New(\u0026#34;CSRF attack detected\u0026#34;) } Data Models Store synced users, groups, and memberships in your database with proper tenant isolation:\nUser Model:\ntype User struct { ID string `db:\u0026#34;id\u0026#34;` TenantID string `db:\u0026#34;tenant_id\u0026#34;` ExternalID string `db:\u0026#34;external_id\u0026#34;` // SCIM ID Email string `db:\u0026#34;email\u0026#34;` Active bool `db:\u0026#34;active\u0026#34;` CreatedAt time.Time `db:\u0026#34;created_at\u0026#34;` UpdatedAt time.Time `db:\u0026#34;updated_at\u0026#34;` } Group Model:\ntype Group struct { ID string `db:\u0026#34;id\u0026#34;` TenantID string `db:\u0026#34;tenant_id\u0026#34;` ExternalID string `db:\u0026#34;external_id\u0026#34;` // SCIM ID DisplayName string `db:\u0026#34;display_name\u0026#34;` CreatedAt time.Time `db:\u0026#34;created_at\u0026#34;` UpdatedAt time.Time `db:\u0026#34;updated_at\u0026#34;` } User-Group Membership:\ntype UserGroup struct { UserID string `db:\u0026#34;user_id\u0026#34;` GroupID string `db:\u0026#34;group_id\u0026#34;` } Conclusion Building multi-tenant authentication requires balancing flexibility with security. Federation services simplify IDP integrations, group-based authorization works well with enterprise directories, and SCIM automates user provisioning. 
Not all IDPs support group claims equally, so a hybrid approach (database-backed groups for some, IDP passthrough for others) provides the best coverage.\nSecurity isn\u0026rsquo;t a one-time thing - it needs ongoing attention, monitoring, and updates as threats evolve. The patterns here provide a solid foundation, but adapt them to your specific requirements and security posture.\nResources RFC 6749 - OAuth 2.0 Authorization Framework OpenID Connect Core 1.0 RFC 7644 - SCIM Protocol Ory Polis Documentation This guide is intended for educational purposes. Always consult official specifications (RFCs) and security best practices for production implementations.\n","permalink":"https://yinebebt.com/post/multi-tenant-auth-oauth-oidc-scim/","summary":"\u003cp\u003eIn SaaS applications, authentication and authorization are critical. As your platform grows to serve multiple customers, each tenant wants to use their own identity provider (IDP), users need automatic provisioning from corporate directories, and access control must work across tenants.\u003c/p\u003e","title":"Building Authentication, Authorization, and Directory Sync: A Practical Guide"},{"content":"When building services that need to run reliably, you face three fundamental problems: managing secrets securely, deploying workloads consistently, and enabling services to find each other. HashiCorp\u0026rsquo;s stack addresses these with Vault, Nomad, and Consul. This guide shows how to set up and integrate all three based on what I learned while building infrastructure for distributed services.\nWhy This Stack? Before diving into configurations, let me explain why I chose this combination. I was looking for tools that could handle production workloads without the complexity overhead of larger platforms.\nVault solves secrets management. Instead of scattering API keys and credentials across configuration files, Vault centralizes them with access control and audit logging. 
You can version secrets, rotate them automatically, and revoke access when needed.\nNomad handles workload orchestration. It\u0026rsquo;s simpler than Kubernetes but still provides scheduling, automatic restarts, and resource allocation. If a container fails, Nomad restarts it. If a node goes down, Nomad reschedules the workloads elsewhere.\nConsul enables service discovery. When you deploy multiple instances of a service, Consul tracks where they are and provides DNS-based lookups. Your applications query api.service.consul instead of hardcoded IP addresses.\nTogether, they form a complete platform: Vault manages credentials, Nomad runs the workloads, and Consul helps them find each other.\nThe Three Core Components Vault: Secrets Management Vault stores sensitive data like database passwords, API tokens, and certificates. Instead of environment variables or config files, applications authenticate to Vault and request secrets at runtime.\nThe key concept is policies. You define who can access which secrets:\npath \u0026#34;secret/data/dev/*\u0026#34; { capabilities = [\u0026#34;create\u0026#34;, \u0026#34;read\u0026#34;, \u0026#34;update\u0026#34;, \u0026#34;delete\u0026#34;, \u0026#34;list\u0026#34;] } This policy allows full access to anything under secret/data/dev/. The path structure is important: Vault\u0026rsquo;s KV version 2 engine uses secret/data/ as the API path, even though CLI commands use secret/.\nVault also provides versioning. Every time you update a secret, it creates a new version while keeping the old ones:\nvault kv put secret/dev/api-key key=v1 vault kv put secret/dev/api-key key=v2 vault kv get -version=1 secret/dev/api-key # Retrieves v1 Nomad: Workload Orchestration Nomad schedules and runs your applications. You define jobs in HCL that specify resource requirements, restart policies, and health checks. 
Nomad handles placement across your cluster.\nKey concepts in a job definition:\nResources: CPU and memory limits ensure fair allocation across the cluster Network: Port mapping; Nomad assigns dynamic host ports that map to container ports Restart policies: Define how Nomad handles failures Service blocks: Enable automatic registration with Consul for service discovery We\u0026rsquo;ll see a complete job definition when we deploy a real workload below.\nConsul: Service Discovery Consul maintains a real-time registry of services. When Nomad starts a task with a service block, it automatically registers with Consul. Other services can then discover it via DNS or the HTTP API.\nDNS queries follow this pattern:\n# Basic service lookup dig @localhost -p 8600 nginx-test.service.consul # With specific datacenter dig @localhost -p 8600 nginx-test.service.dc1.consul # Tagged services dig @localhost -p 8600 web.nginx-test.service.consul Consul also performs health checks. If a service fails its health check, Consul marks it unhealthy and removes it from DNS results.\nHow They Work Together The integration happens at multiple levels:\nNomad ↔ Consul: Nomad jobs declare services that automatically register with Consul. Health checks in the job definition become Consul health checks.\nNomad ↔ Vault: Nomad can authenticate to Vault and inject secrets into tasks as environment variables or files.\nApplications ↔ All Three: Your application uses Consul for service discovery, retrieves credentials from Vault, and runs as a Nomad task.\nHere\u0026rsquo;s a concrete example. 
An API service needs to:\nStore its database password in Vault Run as a Nomad job with 3 replicas Register in Consul so other services can find it The workflow:\nStore database credentials in Vault Define a Nomad job that retrieves those credentials Include a service block so Consul can track instances Other services discover the API via api.service.consul Hands-On Setup Let me walk through setting up all three components together.\nStarting Vault For development, Vault\u0026rsquo;s dev mode keeps everything in memory:\nvault server -dev -dev-listen-address=\u0026#34;0.0.0.0:8200\u0026#34; This outputs a root token and unseal key. In production, you\u0026rsquo;d initialize Vault properly and distribute unseal keys, but for learning, dev mode works fine.\nConfigure authentication and policies:\nexport VAULT_ADDR=\u0026#34;http://localhost:8200\u0026#34; vault login token=$ROOT_TOKEN # Enable userpass authentication vault auth enable userpass # Create a policy for developers vault policy write dev-policy - \u0026lt;\u0026lt;EOF path \u0026#34;secret/data/dev/*\u0026#34; { capabilities = [\u0026#34;create\u0026#34;, \u0026#34;read\u0026#34;, \u0026#34;update\u0026#34;, \u0026#34;delete\u0026#34;, \u0026#34;list\u0026#34;] } EOF # Create a user vault write auth/userpass/users/developer \\ password=\u0026#34;devpass\u0026#34; \\ policies=\u0026#34;dev-policy\u0026#34; Now non-root users can authenticate and access secrets within their permissions.\nNomad-Consul Integration The integration requires specific configuration. 
Here\u0026rsquo;s the Consul config:\ndatacenter = \u0026#34;dc1\u0026#34; data_dir = \u0026#34;/tmp/consul/data\u0026#34; server = true bootstrap_expect = 1 bind_addr = \u0026#34;0.0.0.0\u0026#34; ui_config { enabled = true } client_addr = \u0026#34;0.0.0.0\u0026#34; ports { dns = 8600 http = 8500 grpc = 8502 } connect { enabled = true } The key points:\nbind_addr uses a specific IP instead of localhost for cross-service communication dns = 8600 enables service discovery queries connect.enabled allows service mesh features Nomad configuration includes Consul integration:\ndata_dir = \u0026#34;/tmp/nomad/data\u0026#34; bind_addr = \u0026#34;0.0.0.0\u0026#34; server { enabled = true bootstrap_expect = 1 } client { enabled = true host_volume \u0026#34;docker_sock\u0026#34; { path = \u0026#34;/var/run/docker.sock\u0026#34; read_only = false } } consul { address = \u0026#34;localhost:8500\u0026#34; auto_advertise = true server_auto_join = true client_auto_join = true } The consul block enables automatic service registration. 
When you deploy a job with a service stanza, Nomad registers it with Consul automatically.\nStarting Nomad and Consul Start both services in the background using their configuration files:\n# Start Consul agent consul agent -config-file=consul.hcl \u0026gt; /tmp/consul.log 2\u0026gt;\u0026amp;1 \u0026amp; # Start Nomad agent nomad agent -config=nomad.hcl \u0026gt; /tmp/nomad.log 2\u0026gt;\u0026amp;1 \u0026amp; Verify they\u0026rsquo;re running by checking their status endpoints:\ncurl -s http://localhost:8500/v1/status/leader # Consul curl -s http://localhost:4646/v1/status/leader # Nomad Access the web UIs at http://localhost:8500 (Consul) and http://localhost:4646 (Nomad).\nDeploying a Real Workload Let\u0026rsquo;s deploy an nginx service with full integration:\njob \u0026#34;nginx-test\u0026#34; { datacenters = [\u0026#34;dc1\u0026#34;] type = \u0026#34;service\u0026#34; group \u0026#34;web\u0026#34; { count = 1 network { port \u0026#34;http\u0026#34; { to = 80 } } service { name = \u0026#34;nginx-test\u0026#34; tags = [\u0026#34;web\u0026#34;, \u0026#34;frontend\u0026#34;] port = \u0026#34;http\u0026#34; provider = \u0026#34;consul\u0026#34; check { type = \u0026#34;http\u0026#34; path = \u0026#34;/\u0026#34; interval = \u0026#34;10s\u0026#34; timeout = \u0026#34;3s\u0026#34; } } task \u0026#34;nginx\u0026#34; { driver = \u0026#34;docker\u0026#34; config { image = \u0026#34;nginx:alpine\u0026#34; ports = [\u0026#34;http\u0026#34;] } resources { cpu = 200 memory = 256 } } restart { attempts = 2 interval = \u0026#34;30m\u0026#34; delay = \u0026#34;15s\u0026#34; mode = \u0026#34;fail\u0026#34; } } } Key components:\nService block: Registers with Consul automatically Health check: HTTP check on path / every 10 seconds Restart policy: Attempts 2 restarts within 30 minutes Resources: Limits CPU and memory usage Deploy it:\nexport NOMAD_ADDR=http://localhost:4646 nomad job run nginx-job.hcl Verify the deployment:\n# Check Nomad job status nomad job status nginx-test 
# DNS lookup dig @localhost -p 8600 nginx-test.service.consul # Health check curl -s http://localhost:8500/v1/health/service/nginx-test | jq The service is now discoverable via DNS at nginx-test.service.consul.\nIntegrating with Vault Applications can retrieve secrets from Vault programmatically. Here\u0026rsquo;s a minimal example using the Vault Go client:\nimport \u0026#34;github.com/hashicorp/vault/api\u0026#34; // Create Vault client config := api.DefaultConfig() config.Address = \u0026#34;http://localhost:8200\u0026#34; client, _ := api.NewClient(config) client.SetToken(os.Getenv(\u0026#34;VAULT_TOKEN\u0026#34;)) // Read secret secret, _ := client.Logical().Read(\u0026#34;secret/data/dev/database\u0026#34;) data := secret.Data[\u0026#34;data\u0026#34;].(map[string]interface{}) // Use credentials dbUser := data[\u0026#34;username\u0026#34;].(string) dbPass := data[\u0026#34;password\u0026#34;].(string) The pattern is straightforward: authenticate with a token, read from the secret path, and extract the data. This centralizes secret management: rotating credentials requires only updating Vault, not redeploying code.\nProduction and Next Steps While dev mode works for learning, production requires additional hardening. Here are the key considerations and next steps:\nInitialize Vault with proper seal/unseal using Shamir\u0026rsquo;s Secret Sharing Use AppRole or JWT authentication instead of static tokens Enable TLS for all communication across all three tools Deploy multi-node clusters (3-5 servers) for fault tolerance Enable ACLs with default-deny policies Set resource quotas to prevent cluster exhaustion Conclusion HashiCorp\u0026rsquo;s stack provides a complete foundation for running services in production. Vault manages secrets securely, Nomad orchestrates workloads reliably, and Consul enables service discovery automatically.\nThe integration between these tools reduces operational complexity. 
Services register themselves, credentials are injected securely, and DNS-based discovery works out of the box.\nThis setup handles the fundamental infrastructure problems so you can focus on building your applications instead of managing configuration files and manual deployments.\n","permalink":"https://yinebebt.com/post/hashicorp-stack/","summary":"\u003cp\u003eWhen building services that need to run reliably, you face three fundamental problems: managing secrets securely, deploying workloads consistently, and enabling services to find each other. HashiCorp\u0026rsquo;s stack addresses these with Vault, Nomad, and Consul. This guide shows how to set up and integrate all three based on what I learned while building infrastructure for distributed services.\u003c/p\u003e","title":"Building a Service Platform with HashiCorp: Vault, Nomad, and Consul"},{"content":"As software engineers, we often work with systems that abstract away the fundamental concepts of operating systems. We deploy containers to Kubernetes, scale web services, and optimize database queries without thinking deeply about the underlying resource management. Yet understanding these foundations becomes crucial when we hit performance walls or design systems that need to handle thousands of concurrent operations efficiently.\nThis guide covers CPU scheduling algorithms for systems engineers, backend developers, and DevOps professionals. These concepts help when optimizing microservice architectures, debugging performance issues, or working with distributed systems.\nThis post explores CPU scheduling algorithms, covered in Andrew Tanenbaum\u0026rsquo;s \u0026ldquo;Modern Operating Systems\u0026rdquo; (Chapter 2.5: SCHEDULING), through practical implementation. We\u0026rsquo;ll look at how computers decide which process gets to run when.\nWhy CPU Scheduling Matters in Modern Systems When I first learned about CPU scheduling in class, it felt like an academic exercise. 
After all, operating systems handle this automatically, right? Years later, working with distributed systems, I realized these principles are everywhere.\nConsider what happens when you deploy a microservice to Kubernetes. The Kubernetes scheduler must decide which node gets your pod, balancing resource requests, current load, and affinity rules. This is fundamentally the same problem as CPU scheduling - multiple entities competing for limited resources, with the system making allocation decisions. When an Nginx load balancer handles incoming HTTP requests, it distributes them among worker processes using patterns remarkably similar to Round Robin scheduling.\nThe fundamental challenge that drove the need for CPU scheduling algorithms remains unchanged: efficiently allocating limited resources among competing processes while maintaining fairness, responsiveness, and system stability. What has changed is the scale and complexity - instead of managing a handful of processes on a single machine, we\u0026rsquo;re orchestrating thousands of containers across distributed clusters. But the core principles? They\u0026rsquo;re more relevant than ever.\nUnderstanding these algorithms helps when reasoning about resource contention. When applications become slow, certain requests get delayed, or Kubernetes pods get stuck in pending state, these are often scheduling decisions at work. Understanding the trade-offs helps with diagnosis and optimization.\nUnderstanding the Core Problem Before diving into specific algorithms, let\u0026rsquo;s establish why CPU scheduling exists at all. In the early days of computing, a machine typically ran one program at a time from start to finish. If your program needed to read data from a tape drive (which might take several seconds), the entire CPU would sit idle, waiting. 
Those expensive CPU cycles were simply wasted.\nThis inefficiency led to the development of multiprogramming - the ability to keep multiple programs in memory and switch between them. When one program waits for I/O, another can use the CPU. But this creates a new problem: which program should run next? This decision is what we call CPU scheduling.\nModern computers have evolved this concept to an extreme degree. Your laptop might be running hundreds of processes simultaneously - browser with dozens of tabs, a music player, background system services, development tools, and more. Each process believes it has the CPU to itself, but in reality, the operating system is rapidly switching between them, giving each a tiny slice of time before moving on to the next.\nThe CPU scheduler makes thousands of decisions per second about which process gets CPU time. These decisions impact application response times, resource utilization, and fairness between processes.\nThe Fundamental Metrics To understand whether a scheduling algorithm is working well, we need to measure its performance. There are several key metrics that capture different aspects of user experience and system efficiency.\nTurnaround Time represents the total time a process spends in the system, from the moment it arrives in the ready queue until it completely finishes execution. This metric captures the user\u0026rsquo;s perception of how long their job takes to complete. If you submit a task that requires 5 seconds of CPU time, but it takes 15 seconds to complete due to waiting for other processes, your turnaround time is 15 seconds. This metric is crucial for batch processing systems where users submit jobs and wait for results.\nWaiting Time measures how long a process spends sitting in the ready queue, waiting for its turn on the CPU. This is calculated as the turnaround time minus the actual CPU time needed (burst time). 
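These definitions are simple arithmetic. As a quick sketch (the function names are mine, not the simulator's), they translate directly to Go, using the example above of a task that needs 5 seconds of CPU but takes 15 seconds to finish:

```go
package main

import "fmt"

// Direct translations of the metric definitions in the text.
func turnaroundTime(completion, arrival int) int { return completion - arrival }
func waitingTime(turnaround, burst int) int      { return turnaround - burst }

func main() {
	// The example from the text: a task arrives at t=0, needs 5s of CPU,
	// and finishes at t=15 because other processes ran in between.
	tat := turnaroundTime(15, 0)
	fmt.Println("turnaround:", tat)              // 15
	fmt.Println("waiting:", waitingTime(tat, 5)) // 10
}
```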
High waiting times indicate that processes are spending too much time idle, which generally leads to poor user experience and inefficient resource utilization.\nResponse Time captures how quickly a process gets its first chance to run after arriving in the system. This metric is particularly important for interactive systems where users expect immediate feedback. Think about typing in a text editor - you want to see characters appear instantly, not after waiting for other processes to finish their work.\nThe mathematical relationships between these metrics tell an important story:\nTurnaround Time = Completion Time - Arrival Time\nWaiting Time = Turnaround Time - Burst Time\nResponse Time = Start Time - Arrival Time\nWhere Arrival Time is when the process enters the ready queue, Burst Time is the total CPU time the process needs, Start Time is when it first gets the CPU, and Completion Time is when it finishes execution.\nDifferent scheduling algorithms optimize for different metrics. Some minimize average waiting time, others focus on response time, and some try to balance multiple concerns. Understanding these trade-offs is key to choosing the right approach for different scenarios.\nThe Three Fundamental Scheduling Algorithms Now let\u0026rsquo;s explore the three fundamental scheduling algorithms that form the foundation of modern process management. Each represents a different philosophy about how to fairly and efficiently allocate CPU time.\nFirst Come First Serve (FCFS) First Come First Serve is exactly what it sounds like - processes are executed in the order they arrive, just like customers being served at a bank. The first process to enter the ready queue gets the CPU first, runs to completion, then the second process runs, and so on.\nFCFS is a non-preemptive algorithm, meaning once a process starts executing, it runs until it either completes or voluntarily gives up the CPU (such as when waiting for I/O). 
The operating system doesn\u0026rsquo;t interrupt a running process to give CPU time to another process. This simplicity makes FCFS incredibly easy to understand and implement - you just need a first-in-first-out (FIFO) queue.\nThe appeal of FCFS lies in its fairness from a temporal perspective. No process gets special treatment; everyone waits their turn based on when they arrived. There\u0026rsquo;s no possibility of starvation - every process will eventually get CPU time. This makes it suitable for scenarios where fairness is more important than efficiency, such as batch processing systems where jobs are submitted throughout the day and should be processed in order.\nHowever, FCFS suffers from a significant problem known as the convoy effect. Imagine a scenario where a long-running process (say, one that needs 100 seconds of CPU time) arrives first, followed by several short processes (each needing just 1 second). All the short processes must wait for the long process to complete before they can run. This means processes that could have been completed quickly end up waiting unnecessarily long times.\nThis convoy effect makes FCFS particularly poor for interactive systems. Users typing in applications, clicking buttons, or performing other real-time activities expect immediate responses. If their interactive processes get stuck behind long-running background tasks, the system feels unresponsive and sluggish.\nShortest Job First (SJF) Shortest Job First takes a radically different approach: instead of considering arrival order, it prioritizes processes based on how much CPU time they need. The process with the shortest burst time gets to run first, regardless of when it arrived.\nSJF is also typically non-preemptive - once a process starts running, it completes before the next shortest job is selected. This algorithm has a remarkable theoretical property: it provably minimizes the average waiting time for a given set of processes. 
No other non-preemptive scheduling algorithm can achieve better average waiting time performance.\nThe mathematical intuition behind this optimality is elegant. When you execute shorter jobs first, you reduce the waiting time for more processes. Consider two jobs: one requiring 1 second and another requiring 10 seconds. If you run the 10-second job first, the 1-second job waits 10 seconds. But if you run the 1-second job first, the 10-second job waits only 1 second. The total waiting time is minimized.\nThis makes SJF particularly attractive for batch processing systems where efficiency is paramount. If you have a queue of jobs submitted overnight and want to minimize the average completion time, SJF is theoretically optimal.\nHowever, SJF faces several practical challenges. The most obvious is the prediction problem - how do you know how long a process will run before it actually runs? Operating systems must estimate burst times based on historical behavior, but these estimates are often inaccurate, especially for interactive applications with variable workloads.\nEven more serious is the starvation problem. If short jobs keep arriving, longer jobs might never get a chance to run. Consider this concrete example: a 10-second job and a 1-second job arrive at time 0, and additional 1-second jobs arrive at times 1, 2, 3, and so on. Under pure SJF, the 10-second job would never execute because there\u0026rsquo;s always a shorter job available. After 20 seconds, the long job would still be waiting while 20 short jobs have completed - a clear demonstration of starvation in action.\nRound Robin (RR): The Time-Sharing Revolution Round Robin introduced a fundamentally different concept: preemption. Instead of letting processes run to completion, RR gives each process a fixed amount of time called a time quantum or time slice. 
When a process\u0026rsquo;s quantum expires, it\u0026rsquo;s forcibly removed from the CPU and placed at the end of the ready queue, allowing the next process to run.\nThis preemptive approach was revolutionary because it solved many problems with earlier algorithms. No single process can monopolize the CPU for extended periods. Long-running processes can\u0026rsquo;t block short interactive tasks. Every process gets regular opportunities to make progress, creating the illusion that all processes are running simultaneously.\nThe time quantum is typically quite small - modern systems often use values between 10 and 100 milliseconds. This creates the user perception of true multitasking. When you\u0026rsquo;re typing in a word processor while music plays in the background and a file download progresses, Round Robin scheduling makes all these activities appear to happen simultaneously.\nRR excels at response time - the time from when a process arrives until it first gets CPU access. Since no process can run for more than one quantum before other processes get their turn, newly arrived processes quickly get their first chance to execute. This makes RR ideal for interactive systems where user responsiveness is critical.\nHowever, preemption comes with costs. Every time the system switches from one process to another, it must perform a context switch - saving the current process\u0026rsquo;s state (registers, memory mappings, etc.) and loading the next process\u0026rsquo;s state. These operations consume CPU cycles and memory bandwidth, representing pure overhead that doesn\u0026rsquo;t advance any process\u0026rsquo;s work.\nThe choice of time quantum creates an interesting trade-off. A very small quantum provides excellent response time and fairness but increases context switching overhead. A very large quantum reduces overhead but approaches the behavior of FCFS, potentially creating responsiveness problems. 
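One way to see this trade-off concretely: if each context switch costs s and the quantum is q, then every q units of useful work incur s units of switching, so roughly s/(q+s) of CPU time is pure overhead. This is a back-of-the-envelope model of my own, not part of the simulator, and it ignores cache and TLB effects:

```go
package main

import "fmt"

// overheadPct estimates the percentage of CPU time spent on context
// switches for quantum q and per-switch cost s (same time unit, e.g. ms).
func overheadPct(q, s float64) float64 { return s / (q + s) * 100 }

func main() {
	const switchCost = 1.0 // assume each switch costs 1ms
	for _, q := range []float64{1, 4, 10, 100} {
		fmt.Printf("quantum=%4.0fms -> %4.1f%% switching overhead\n", q, overheadPct(q, switchCost))
	}
	// A 4ms quantum with a 1ms switch cost loses 20% of the CPU to switching,
	// while a 100ms quantum loses about 1% but responds far more sluggishly.
}
```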
Finding the optimal quantum size requires balancing these competing concerns based on the system\u0026rsquo;s workload characteristics.\nBuilding a CPU Scheduler Simulator Now that we understand the theory, let\u0026rsquo;s build a CPU scheduling simulator. This implementation helps us see how the algorithms behave and compare their performance.\nThe implementation uses a Process struct to track timing information and a Scheduler interface that each algorithm implements. This lets us compare their performance on the same workload.\nThe Process struct contains all the timing information we need to calculate key metrics:\ntype Process struct {\n	ID             int // Process identifier\n	ArrivalTime    int // When process enters ready queue\n	BurstTime      int // Total CPU time required\n	StartTime      int // When process first gets CPU\n	CompletionTime int // When process completes execution\n	WaitingTime    int // Total time spent waiting\n	TurnaroundTime int // Total time from arrival to completion\n	ResponseTime   int // Time from arrival to first CPU allocation\n	RemainingTime  int // For preemptive algorithms\n}\nThe ArrivalTime and BurstTime are inputs that define the workload. The other fields are calculated by the scheduling algorithms. 
The Scheduler interface provides a contract for each algorithm:\ntype Scheduler interface {\n	Schedule(processes []Process) SchedulingResult\n	Name() string\n}\nFCFS Implementation The FCFS implementation is straightforward:\nfunc (f *FCFSScheduler) Schedule(processes []Process) SchedulingResult {\n	// Create a copy to avoid modifying original data\n	procs := make([]Process, len(processes))\n	copy(procs, processes)\n	// Sort by arrival time - the core of FCFS\n	sort.Slice(procs, func(i, j int) bool {\n		return procs[i].ArrivalTime \u0026lt; procs[j].ArrivalTime\n	})\n	currentTime := 0\n	for i := range procs {\n		// Handle CPU idle time\n		if currentTime \u0026lt; procs[i].ArrivalTime {\n			currentTime = procs[i].ArrivalTime\n		}\n		// Process runs immediately when selected\n		procs[i].StartTime = currentTime\n		currentTime += procs[i].BurstTime\n		procs[i].CompletionTime = currentTime\n		// Calculate metrics\n		procs[i].TurnaroundTime = procs[i].CompletionTime - procs[i].ArrivalTime\n		procs[i].WaitingTime = procs[i].TurnaroundTime - procs[i].BurstTime\n		procs[i].ResponseTime = procs[i].StartTime - procs[i].ArrivalTime\n	}\n	// Build and return the SchedulingResult from procs (omitted in this excerpt)\n}\nFCFS is simple to implement - just sort by arrival time and execute in order. 
The algorithm handles cases where the CPU might be idle and calculates timing metrics.\nSJF Implementation: Dynamic Selection SJF requires more sophisticated logic because we must dynamically select the shortest available job at each decision point:\nfunc (s *SJFScheduler) Schedule(processes []Process) SchedulingResult {\n	// Track which processes have completed\n	completed := make([]bool, len(processes))\n	currentTime := 0\n	for completedCount := 0; completedCount \u0026lt; len(processes); {\n		// Find all available processes\n		availableProcesses := []int{}\n		for i := range processes {\n			if processes[i].ArrivalTime \u0026lt;= currentTime \u0026amp;\u0026amp; !completed[i] {\n				availableProcesses = append(availableProcesses, i)\n			}\n		}\n		if len(availableProcesses) == 0 {\n			// CPU is idle - advance to next arrival\n			currentTime = findNextArrival(processes, completed, currentTime)\n			continue\n		}\n		// Select shortest job among available processes\n		shortestIdx := availableProcesses[0]\n		for _, idx := range availableProcesses {\n			if processes[idx].BurstTime \u0026lt; processes[shortestIdx].BurstTime {\n				shortestIdx = idx\n			}\n		}\n		// Execute the selected process; executeProcess returns the time it finishes\n		currentTime = executeProcess(processes, shortestIdx, currentTime)\n		completed[shortestIdx] = true\n		completedCount++\n	}\n	// Build and return the SchedulingResult (omitted in this excerpt)\n}\nAt each decision point, we examine all processes that have arrived and choose the one with the shortest burst time. 
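The excerpt above leans on a findNextArrival helper that it doesn't show. A plausible version (my sketch, matching the call in the SJF loop; the real one lives in the repository's cpu-scheduler.go) advances the clock to the earliest pending arrival:

```go
package main

import "fmt"

// Trimmed-down Process with just the fields this helper needs.
type Process struct {
	ID          int
	ArrivalTime int
}

// findNextArrival returns the earliest arrival time among processes that
// have not completed and have not yet arrived; if none remain, the current
// time is returned unchanged.
func findNextArrival(processes []Process, completed []bool, currentTime int) int {
	next := -1
	for i := range processes {
		if completed[i] || processes[i].ArrivalTime <= currentTime {
			continue
		}
		if next == -1 || processes[i].ArrivalTime < next {
			next = processes[i].ArrivalTime
		}
	}
	if next == -1 {
		return currentTime
	}
	return next
}

func main() {
	ps := []Process{{1, 0}, {2, 5}, {3, 9}}
	done := []bool{true, false, false}
	// The CPU is idle at t=2; jump straight to P2's arrival.
	fmt.Println(findNextArrival(ps, done, 2)) // 5
}
```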
The algorithm handles scenarios where the CPU might be idle waiting for processes to arrive.\nRound Robin Implementation Round Robin is more complex because it must handle preemption, queue management, and new process arrivals during execution:\nfunc (r *RoundRobinScheduler) Schedule(processes []Process) SchedulingResult {\n	// Initialize remaining times\n	for i := range processes {\n		processes[i].RemainingTime = processes[i].BurstTime\n	}\n	currentTime := 0\n	readyQueue := []int{}\n	// Loop until every process has finished\n	for !allProcessesComplete() {\n		// Add newly arrived processes\n		addArrivedProcesses(processes, currentTime, \u0026amp;readyQueue)\n		if len(readyQueue) == 0 {\n			currentTime = findNextArrival(processes, currentTime)\n			continue\n		}\n		// Get next process from queue\n		currentProcess := readyQueue[0]\n		readyQueue = readyQueue[1:]\n		// Calculate execution time for this quantum\n		execTime := min(r.TimeQuantum, processes[currentProcess].RemainingTime)\n		// Execute for quantum or until completion\n		currentTime += execTime\n		processes[currentProcess].RemainingTime -= execTime\n		// Handle process completion or requeuing\n		if processes[currentProcess].RemainingTime == 0 {\n			processes[currentProcess].CompletionTime = currentTime\n		} else {\n			readyQueue = append(readyQueue, currentProcess)\n		}\n		// Add any processes that arrived during execution\n		addArrivedDuringExecution(processes, currentTime-execTime, currentTime, \u0026amp;readyQueue)\n	}\n	// Build and return the SchedulingResult (omitted in this excerpt)\n}\nThe Round Robin implementation manages the ready queue, adding newly arrived processes and placing preempted processes back in the queue.\nThe complete source code for all implementations is available in the tutorials repository as cpu-scheduler.go.\nPerformance Analysis: Comparing Algorithm Effectiveness Let\u0026rsquo;s analyze the scheduling algorithms using a workload that represents common scenarios. 
The test processes include long-running batch jobs, medium interactive tasks, and short system processes:\nProcess Set:\nP1 (Arrival=0, Burst=8): A long-running CPU-bound task, similar to a video encoding job or scientific computation\nP2 (Arrival=1, Burst=4): A medium interactive task, like a user application responding to input\nP3 (Arrival=2, Burst=9): A long batch job, such as a database backup or large file processing\nP4 (Arrival=3, Burst=5): A medium background task, like system maintenance or log processing\nP5 (Arrival=4, Burst=2): A short interactive task, such as a quick file operation or system query\nThis workload shows the strengths and weaknesses of each algorithm. The mix of arrival times tests how algorithms handle process queuing, and the variety of burst times shows how they deal with jobs of different lengths.\nDetailed Performance Comparison Running the simulation shows how each algorithm behaves:\nAlgorithm         | Avg Waiting Time | Avg Turnaround Time | Avg Response Time | Context Switches\nFCFS              | 11.40            | 17.00               | 11.40             | 4\nSJF               | 8.20             | 13.80               | 8.20              | 4\nRound Robin (Q=3) | 14.20            | 19.80               | 6.80              | 9\nThese numbers show the trade-offs in CPU scheduling.\nSJF\u0026rsquo;s Mathematical Optimality in Practice: SJF achieves the lowest average waiting time (8.20) and turnaround time (13.80), confirming its theoretical optimality. By executing shorter jobs first, it minimizes the total time processes spend waiting. In my workload, the short 2-second process (P5) and medium 4-second process (P2) get prioritized over the longer 8 and 9-second processes, reducing overall system wait time.\nHowever, SJF\u0026rsquo;s response time equals its waiting time (8.20), meaning some processes experience significant delays before their first CPU access. Process P3, despite arriving at time 2, doesn\u0026rsquo;t start until much later because shorter processes keep getting prioritized. 
This demonstrates the starvation problem in action.\nRound Robin\u0026rsquo;s Response Time Advantage: Round Robin achieves better response time (6.80 vs 11.40 for FCFS), which is why this algorithm works well for interactive systems. Every process gets CPU access within one quantum of arriving.\nThe trade-off becomes apparent in the context switch count - Round Robin performs 9 context switches compared to just 4 for the non-preemptive algorithms. Each context switch represents overhead: time spent saving and restoring process state instead of doing useful work. In real systems, this overhead can be significant, especially with very small quantum sizes.\nInterestingly, Round Robin shows higher waiting (14.20) and turnaround times (19.80) compared to SJF, demonstrating that responsiveness comes at the cost of overall efficiency.\nFCFS\u0026rsquo;s Convoy Effect Demonstration: FCFS shows the poorest performance overall, with the highest waiting and response times. The long process P1 arriving first creates a convoy effect - all subsequent processes must wait for it to complete before getting any CPU time. 
Process P5, which needs only 2 seconds of CPU time, must wait over 20 seconds to even start executing.\nUnderstanding the Execution Patterns The Gantt charts generated by the simulator reveal the execution patterns that create these performance differences:\nFCFS Execution Pattern:\n| P1     | P2  | P3      | P4   |P5|\n0        8     12        21     26 28\nThe convoy effect is visually obvious - P1\u0026rsquo;s long execution blocks all other processes, creating increasingly long wait times for later arrivals.\nSJF Execution Pattern:\n| P1     |P5| P2  | P4   | P3      |\n0        8  10    14     19        28\nSJF reorders execution to run shorter jobs first, dramatically reducing average wait time but potentially delaying longer processes significantly.\nRound Robin Execution Pattern (Quantum=3):\n|P1 |P2 |P1 |P3 |P4 |P1|P5|P2|P3 |P4|P3 |\n0   3   6   9   12  15 18 19 22  25 28\nRound Robin interleaves process execution, giving everyone regular CPU access but requiring many context switches to achieve this fairness.\nReal-World Applications: Where These Principles Live Today Understanding these scheduling fundamentals provides powerful insights into modern systems that might not obviously appear related to CPU scheduling. Let\u0026rsquo;s explore how these principles manifest in contemporary technology.\nContainer Orchestration: Kubernetes as a Macro-Scheduler Kubernetes fundamentally implements a sophisticated version of priority-based scheduling at the cluster level. 
When you submit a pod for deployment, the Kubernetes scheduler must decide which node should run your container - a decision remarkably similar to a CPU scheduler choosing which process to run next.\nConsider this Kubernetes deployment configuration:\napiVersion: v1\nkind: Pod\nspec:\n  priorityClassName: high-priority # Like process priority\n  containers:\n  - name: app\n    resources:\n      requests:\n        cpu: \u0026#34;100m\u0026#34; # Equivalent to burst time estimate\n        memory: \u0026#34;128Mi\u0026#34;\n      limits:\n        cpu: \u0026#34;500m\u0026#34; # Maximum allowed CPU usage\n        memory: \u0026#34;512Mi\u0026#34;\nThe cpu request serves as an estimate of the \u0026ldquo;burst time\u0026rdquo; the container needs, similar to SJF\u0026rsquo;s job length predictions. The priorityClassName implements priority scheduling, ensuring critical workloads get preferential treatment over less important background tasks. When multiple pods compete for limited node resources, Kubernetes makes scheduling decisions that balance efficiency (packing pods efficiently onto nodes) with fairness (ensuring all pods eventually get resources).\nThe parallel becomes even more apparent when examining Kubernetes\u0026rsquo; handling of resource contention. If a node becomes overloaded, Kubernetes might evict lower-priority pods to make room for higher-priority ones - essentially implementing preemptive priority scheduling at the cluster level.\nWeb Server Architecture: Request Scheduling in Practice Modern web servers like Nginx implement request handling patterns that directly mirror CPU scheduling algorithms. When Nginx receives multiple concurrent HTTP requests, it must decide how to allocate worker processes among them.\nNginx\u0026rsquo;s default configuration uses a Round Robin-like approach among worker processes, ensuring no single request monopolizes system resources. 
This prevents the convoy effect I saw with FCFS - a slow request (like one that requires database queries or external API calls) doesn\u0026rsquo;t block faster requests (like serving static files).\nMore sophisticated load balancers implement weighted Round Robin (similar to priority scheduling) or least-connections algorithms (similar to SJF, favoring workers with shorter queues). These decisions directly impact user experience metrics like response time and throughput.\nDatabase Query Planning: SJF in Action Database systems like PostgreSQL face scheduling decisions remarkably similar to CPU scheduling examples. Query planners must balance short transactional queries against long-running analytical workloads, making decisions that mirror the trade-offs between SJF and Round Robin.\nMany databases implement query prioritization systems that favor shorter queries, effectively implementing SJF-like behavior. OLTP (Online Transaction Processing) queries typically get priority over OLAP (Online Analytical Processing) queries because they\u0026rsquo;re shorter and users expect immediate responses. However, pure SJF would cause starvation of analytical queries, so databases often implement sophisticated priority schemes that ensure long-running queries eventually execute.\nSome databases use time-slicing for expensive queries, allowing them to be paused periodically to let shorter queries execute - a direct implementation of Round Robin scheduling at the query level.\nPerformance Tuning Insights: Applying Scheduling Wisdom Understanding CPU scheduling principles provides practical guidance for optimizing real systems. Here are key insights that translate directly to modern performance engineering:\nQuantum Size Optimization: Just as Round Robin performance depends on choosing the right time quantum, many systems have similar tuning parameters. 
Web server worker process counts, database connection pool sizes, and thread pool configurations all represent quantum-like decisions that balance responsiveness against overhead.\nContext Switch Overhead: Our simulation shows Round Robin using 9 context switches compared to 4 for non-preemptive algorithms. In real systems, this translates to cache misses, TLB flushes, and pipeline stalls. Modern applications should consider this when designing concurrent systems - sometimes batching work is more efficient than immediate responsiveness.\nWorkload Characterization: The effectiveness of different scheduling algorithms depends heavily on workload characteristics. Systems with mostly short, similar-length tasks benefit from Round Robin-like approaches, while systems with predictable, varied-length jobs might benefit from SJF-like prioritization.\nThe Response Time vs. Throughput Trade-off: Our results clearly show this classic trade-off - Round Robin achieves the best response time (6.80) but worst waiting time (14.20). This principle applies to web servers, databases, and distributed systems where you must choose between quick responses and overall system efficiency.\nStarvation Prevention: Real systems must guard against starvation just like scheduling algorithms. API rate limiting, queue depth limits, and priority aging mechanisms all implement concepts designed to ensure fairness and prevent resource monopolization.\nAdvanced Concepts: Beyond Basic Scheduling While the implementation covers the fundamental algorithms, modern operating systems implement far more sophisticated approaches. Understanding these advanced concepts helps bridge the gap between academic scheduling and production systems.\nMultilevel Queue Scheduling partitions processes into different priority classes, each with its own scheduling algorithm. Interactive processes might use Round Robin for responsiveness, while batch processes use FCFS for simplicity. 
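As a toy illustration (my own sketch, not how any particular kernel implements it), the dispatch rule for such a two-level queue fits in a few lines: interactive jobs always run first, and batch jobs run only when the interactive queue is empty:

```go
package main

import "fmt"

// A toy multilevel queue: two priority classes, each with its own policy.
// Interactive jobs are served Round Robin among themselves; batch jobs
// run FCFS, and only when no interactive job is waiting.
type job struct {
	name      string
	remaining int
}

// pick returns the name of the next job to run, or false if both queues are empty.
func pick(interactive, batch []job) (string, bool) {
	if len(interactive) > 0 {
		return interactive[0].name, true // head of the RR queue
	}
	if len(batch) > 0 {
		return batch[0].name, true // oldest batch job (FCFS)
	}
	return "", false // nothing runnable
}

func main() {
	inter := []job{{"editor", 2}, {"shell", 1}}
	batch := []job{{"backup", 50}}
	next, _ := pick(inter, batch)
	fmt.Println("run:", next) // the interactive queue wins: "editor"
}
```

A real multilevel scheduler would also age batch jobs upward to prevent the starvation problem discussed earlier.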
This hybrid approach allows systems to optimize for different workload characteristics simultaneously.\nMultilevel Feedback Queue Scheduling goes further by allowing processes to move between queues based on their behavior. A process that uses its full quantum (indicating CPU-intensive work) might be moved to a lower priority queue, while a process that yields early (indicating I/O-intensive work) stays in high-priority queues. This dynamic adaptation helps systems automatically tune their scheduling behavior to workload patterns.\nReal-time Scheduling introduces concepts like deadline scheduling and rate-monotonic scheduling for systems where missing deadlines has serious consequences. These algorithms prioritize predictability over efficiency, ensuring critical tasks meet their timing requirements even if overall system throughput suffers.\nKey Takeaways Working through these CPU scheduling algorithms shows how fundamental principles remain relevant as technology evolves.\nThe same trade-offs that operating system designers grappled with in the 1960s appear throughout modern distributed systems. Whether you\u0026rsquo;re optimizing Docker container resource allocation, tuning Kubernetes cluster performance, designing microservice architectures, or debugging database query performance, you\u0026rsquo;re fundamentally dealing with resource scheduling decisions.\nThese algorithms appear in many systems. When some requests are faster than others, when workloads interfere with each other, or when system performance varies under load, these are often scheduling decisions at work.\nBuilding this simulator helps develop intuition for resource contention, performance trade-offs, and system optimization. 
These concepts are useful when designing systems that handle varying loads, debugging performance problems, or making architectural decisions.\nThese principles apply when scaling web applications, optimizing data pipelines, or working with distributed systems.\nThe next step would be implementing memory paging algorithms to understand how systems manage both CPU time and memory space.\nReferences:\nTanenbaum, Andrew S., and Herbert Bos. \u0026ldquo;Modern Operating Systems\u0026rdquo; 4th Edition, Chapter 2.5: Scheduling. Pearson, 2014. Kubernetes Scheduling and Eviction Documentation - Official Kubernetes documentation on pod scheduling Nginx HTTP Load Balancing - Official Nginx load balancing configuration guide PostgreSQL Documentation: Resource Consumption - Database resource management configurations Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. \u0026ldquo;Operating System Concepts\u0026rdquo; 10th Edition, Chapter 5: CPU Scheduling. Wiley, 2018. Feel free to reach out with feedback on technical details or suggestions for improvements.\n","permalink":"https://yinebebt.com/post/cpu-scheduler/","summary":"\u003cp\u003eAs software engineers, we often work with systems that abstract away the fundamental concepts of operating systems. We deploy containers to Kubernetes, scale web services, and optimize database queries without thinking deeply about the underlying resource management. Yet understanding these foundations becomes crucial when we hit performance walls or design systems that need to handle thousands of concurrent operations efficiently.\u003c/p\u003e","title":"Understanding CPU Scheduling Algorithms"},{"content":"Model Context Protocol (MCP) enables AI assistants to interact with external tools and services. In this guide, we\u0026rsquo;ll build a simple calculator server that demonstrates MCP concepts by creating a tool that Claude (or other MCP clients) can use to perform mathematical calculations.\nWhat is MCP? 
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect to data sources and tools. Instead of hardcoding integrations, MCP allows you to create servers that expose tools, resources, and prompts that AI assistants can dynamically discover and use.\nKey Components MCP Server: Exposes tools and resources (like our calculator) MCP Client: AI assistants like Claude Desktop that consume MCP services Tools: Functions that the AI can call (like our calculate function) JSON-RPC: The underlying communication protocol Building the Calculator Server Let\u0026rsquo;s start by examining our MCP calculator server implementation:\npackage main import ( \u0026#34;context\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;github.com/mark3labs/mcp-go/mcp\u0026#34; \u0026#34;github.com/mark3labs/mcp-go/server\u0026#34; ) func main() { // Create MCP server s := server.NewMCPServer(\u0026#34;calculator-server\u0026#34;, \u0026#34;0.1.0\u0026#34;) // Add calculator tool s.AddTool(mcp.NewTool( \u0026#34;calculate\u0026#34;, mcp.WithDescription(\u0026#34;Perform basic mathematical operations like add, subtract, multiply, and divide on two numbers\u0026#34;), mcp.WithString(\u0026#34;expression\u0026#34;, mcp.Description(\u0026#34;A mathematical expression to evaluate (e.g., \u0026#39;2 + 3\u0026#39;, \u0026#39;10 * 5\u0026#39;, \u0026#39;15 / 3\u0026#39;)\u0026#34;), mcp.Required(), ), ), handleCalculate) // Serve using stdio if err := server.ServeStdio(s); err != nil { log.Fatalf(\u0026#34;Server error: %v\u0026#34;, err) } } Tool Registration The heart of our MCP server is tool registration. We define:\nTool name: calculate Description: What the tool does Parameters: Input schema (expression string) Handler: The function that processes requests Understanding the MCP Protocol Flow 1. 
Tool Name Resolution When you configure an MCP server in Claude Desktop, it gets a namespace:\nlocal__calculator__calculate │ │ │ │ │ └── Function: \u0026#34;calculate\u0026#34; │ └── Category: \u0026#34;calculator\u0026#34; (from config) └── Namespace: \u0026#34;local\u0026#34; The server uses simple tool names such as \u0026ldquo;calculate\u0026rdquo;, but the client adds namespace prefixes for organization.\n2. Client-Server Communication The communication follows this JSON-RPC flow:\nTool Discovery First, the client asks for available tools:\n{ \u0026#34;jsonrpc\u0026#34;: \u0026#34;2.0\u0026#34;, \u0026#34;id\u0026#34;: 2, \u0026#34;method\u0026#34;: \u0026#34;tools/list\u0026#34;, \u0026#34;params\u0026#34;: {} } Server responds with:\n{ \u0026#34;id\u0026#34;: 2, \u0026#34;jsonrpc\u0026#34;: \u0026#34;2.0\u0026#34;, \u0026#34;result\u0026#34;: { \u0026#34;tools\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;calculate\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Perform basic mathematical operations like add, subtract, multiply, and divide on two numbers\u0026#34;, \u0026#34;inputSchema\u0026#34;: { \u0026#34;properties\u0026#34;: { \u0026#34;expression\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;A mathematical expression to evaluate (e.g., \u0026#39;2 + 3\u0026#39;, \u0026#39;10 * 5\u0026#39;, \u0026#39;15 / 3\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;required\u0026#34;: [\u0026#34;expression\u0026#34;], \u0026#34;type\u0026#34;: \u0026#34;object\u0026#34; } } ] } } Tool Execution When Claude wants to calculate \u0026ldquo;3 + 3\u0026rdquo;, it sends:\n{ \u0026#34;jsonrpc\u0026#34;: \u0026#34;2.0\u0026#34;, \u0026#34;id\u0026#34;: 3, \u0026#34;method\u0026#34;: \u0026#34;tools/call\u0026#34;, \u0026#34;params\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;calculate\u0026#34;, \u0026#34;arguments\u0026#34;: { \u0026#34;expression\u0026#34;: \u0026#34;3 + 3\u0026#34; } } } Our server 
processes this and responds:\n{ \u0026#34;id\u0026#34;: 3, \u0026#34;jsonrpc\u0026#34;: \u0026#34;2.0\u0026#34;, \u0026#34;result\u0026#34;: { \u0026#34;content\u0026#34;: [ { \u0026#34;text\u0026#34;: \u0026#34;6\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;text\u0026#34; } ] } } Testing the MCP Server I\u0026rsquo;ve created a test client to help debug MCP servers. Here\u0026rsquo;s how the interaction looks:\ngo run -tags=client client.go /home/yina/go/bin/mcp-calculator-server Test Results Starting MCP server: /home/yina/go/bin/mcp-calculator-server ============================================================ MCP Client - Choose an action: 1. Initialize connection 2. List available tools 3. Calculate 3 + 3 4. Calculate custom expression 5. Send custom JSON-RPC message 6. Exit ============================================================ Configuration and Setup Claude Desktop Configuration Add this to the Claude Desktop configuration:\n{ \u0026#34;mcpServers\u0026#34;: { \u0026#34;calculator\u0026#34;: { \u0026#34;command\u0026#34;: \u0026#34;/path/to/mcp-calculator-server\u0026#34; } } } The key \u0026quot;calculator\u0026quot; becomes the category in the namespaced tool name.\nInstallation go install github.com/yinebebt/mcp-calculator-server@latest Key Insights 1. Naming Convention Server side: Use simple, descriptive names (calculate) Client side: Automatically adds namespacing (local__calculator__calculate) Configuration: The category comes from the MCP client config 2. Protocol Structure Initialization: Establish connection and capabilities Discovery: List available tools Execution: Call tools with parameters and return structured results 3. 
Error Handling Proper error responses follow the JSON-RPC spec:\n{ \u0026#34;error\u0026#34;: { \u0026#34;code\u0026#34;: -32602, \u0026#34;message\u0026#34;: \u0026#34;tool \u0026#39;invalid-tool\u0026#39; not found: tool not found\u0026#34; }, \u0026#34;id\u0026#34;: 3, \u0026#34;jsonrpc\u0026#34;: \u0026#34;2.0\u0026#34; } Real-World Usage Once configured, you can ask Claude:\n\u0026ldquo;Can you calculate 25 * 4?\u0026rdquo; \u0026ldquo;What\u0026rsquo;s 15 divided by 3?\u0026rdquo; \u0026ldquo;Help me with this math: (10 + 5) * 2\u0026rdquo; Claude will automatically use the calculator server to provide accurate results.\nConclusion Building an MCP server is straightforward once you understand the protocol flow. The key principles are:\nSimple tool registration with clear descriptions Proper JSON-RPC handling for communication Structured responses that clients can process Good error handling for robust operation This calculator example demonstrates the fundamentals. MCP can support more complex integrations including database queries, API calls, and file system operations.\nMCP provides a modular, protocol-driven architecture where AI assistants can discover and use tools to help users accomplish their goals.\nSource code available at: github.com/yinebebt/mcp-calculator-server\n","permalink":"https://yinebebt.com/post/build-mcp-server/","summary":"\u003cp\u003eModel Context Protocol (MCP) enables AI assistants to interact with external tools and services. In this guide, we\u0026rsquo;ll build a simple calculator server that demonstrates MCP concepts by creating a tool that Claude (or other MCP clients) can use to perform mathematical calculations.\u003c/p\u003e","title":"Building an MCP Server: A Complete Guide"},{"content":"We\u0026rsquo;ll start with a minimal Go server, package it with Docker, spin up a local cluster using Minikube, and write the manifests to get it running.\nWhy Kubernetes for Go Applications? 
Kubernetes provides orchestration that makes managing applications at scale easier. Here\u0026rsquo;s why Kubernetes works well for Go applications:\nAutomatic Scaling and Self-Healing: Kubernetes automatically scales your application based on traffic and restarts failed containers, ensuring high availability. Zero-Downtime Deployments: Rolling updates allow you to deploy new versions of your application without downtime. Service Discovery and Load Balancing: Kubernetes provides built-in mechanisms for service discovery and distributes traffic evenly across your application instances. Resource Optimization: Kubernetes allocates resources (CPU, memory) across your cluster, ensuring optimal utilization. Unified Management: Kubernetes simplifies the management of microservices by providing a single platform to deploy, scale, and monitor your applications. Project Overview Our tech stack will include:\nGo for the web server Docker for containerization Minikube for local Kubernetes cluster kubectl for cluster management Here\u0026rsquo;s the project structure:\nProject Structure: ./ ├── k8s.yaml ├── Dockerfile ├── main.go └── README.md Core Kubernetes Concepts Before diving into the implementation, let’s understand the core Kubernetes concepts we’ll be using:\n1. Pods: The Atomic Unit Pods are the smallest deployable units in Kubernetes. A Pod can contain one or more containers that share the same network and storage namespace. In our case, the Go application will run in a single-container Pod. Pods are ephemeral, meaning they can be created, destroyed, and replaced dynamically.\n2. Deployments: State Management Deployments manage the desired state of your application. They ensure that a specified number of Pod replicas are running at all times. Deployments also handle rolling updates and rollbacks, making them ideal for managing stateless applications like our Go web server.\n3. 
Services: Network Abstraction Services provide a stable network endpoint to access your Pods. They abstract away the dynamic nature of Pod IPs by providing a consistent DNS name and IP address. In our example, we’ll use a LoadBalancer service to expose our Go application to the outside world.\nhttps://kubernetes.io/images/docs/components-of-kubernetes.svg\nStep-by-Step Implementation 1. Go Application Let’s start by creating a minimal Go web server. Here’s the code for main.go:\npackage main import ( \u0026#34;context\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;net/http\u0026#34; \u0026#34;os\u0026#34; \u0026#34;os/signal\u0026#34; \u0026#34;time\u0026#34; ) func welcomeHandler(w http.ResponseWriter, _ *http.Request) { _, err := fmt.Fprintln(w, \u0026#34;Hello, Welcome to Kubernetes world!\u0026#34;) if err != nil { log.Printf(\u0026#34;Error writing response: %v\u0026#34;, err) } } func main() { mux := http.NewServeMux() mux.HandleFunc(\u0026#34;/\u0026#34;, welcomeHandler) server := \u0026amp;http.Server{ Addr: \u0026#34;:8080\u0026#34;, Handler: mux, } // channel to listen for OS signals stop := make(chan os.Signal, 1) signal.Notify(stop, os.Interrupt) go func() { log.Println(\u0026#34;k8s-go is running at port 8080 ...\u0026#34;) if err := server.ListenAndServe(); err != nil \u0026amp;\u0026amp; err != http.ErrServerClosed { log.Fatalf(\u0026#34;server error: %v\u0026#34;, err) } }() \u0026lt;-stop log.Println(\u0026#34;shutting down server...\u0026#34;) // graceful shutdown ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() if err := server.Shutdown(ctx); err != nil { log.Fatalf(\u0026#34;server forced to shutdown: %v\u0026#34;, err) } log.Println(\u0026#34;server exited gracefully\u0026#34;) } This server includes graceful shutdown handling, proper error logging, and structured HTTP routing - essential for production Kubernetes deployments.\n2. 
Containerization with Docker Next, we’ll containerize the Go application using a multi-stage Dockerfile. This approach ensures that the final image is lightweight by only including the necessary runtime dependencies.\n# Build stage FROM golang:1.23-alpine AS builder WORKDIR /app COPY main.go . RUN GO111MODULE=off go build -o main . # Runtime stage FROM alpine:3.21 WORKDIR /app COPY --from=builder /app/main ./main EXPOSE 8080 CMD [\u0026#34;./main\u0026#34;] To build and push the Docker image, run:\ndocker build -t yinebeb/k8s-go:1.2.1 . docker push yinebeb/k8s-go:1.2.1 3. Kubernetes Cluster Setup with Minikube To run Kubernetes locally, we’ll use Minikube. Here’s how to set it up:\n# Linux installation curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64 sudo install minikube-linux-amd64 /usr/local/bin/minikube \u0026amp;\u0026amp; rm minikube-linux-amd64 # Start the cluster minikube start Verify that the cluster is running:\nminikube status Deployment Configuration Unified Manifest (k8s.yaml) We use a single unified configuration file that contains both the Deployment and Service resources, separated by ---. 
This approach simplifies deployment management and keeps related resources together.\n# Deployment Resource apiVersion: apps/v1 kind: Deployment metadata: name: k8s-go-deployment spec: replicas: 4 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% # allow 25% pods unavailable during update maxSurge: 25% # allow temporary scaling up selector: matchLabels: app: k8s-go template: metadata: labels: app: k8s-go spec: containers: - name: k8s-go image: yinebeb/k8s-go:1.2.1 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 resources: requests: memory: \u0026#34;64Mi\u0026#34; cpu: \u0026#34;100m\u0026#34; limits: memory: \u0026#34;128Mi\u0026#34; cpu: \u0026#34;500m\u0026#34; livenessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 5 periodSeconds: 10 readinessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 5 periodSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: k8s-go-service spec: type: LoadBalancer selector: app: k8s-go ports: - protocol: TCP port: 80 targetPort: 8080 Key Components:\nReplica Count: Set to 4 for high availability and load distribution Rolling Update Strategy: Ensures zero-downtime deployments with controlled pod replacement Health Probes: livenessProbe: Restarts containers that become unresponsive readinessProbe: Ensures traffic only goes to ready pods Resource Management: Defines CPU and memory requests/limits for efficient cluster utilization LoadBalancer Service: Exposes the application externally via port 80, routing to container port 8080 Deployment Workflow Apply the configuration: kubectl apply -f k8s.yaml Verify the deployment: kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE k8s-go-deployment 4/4 4 4 1m Check pod status: kubectl get pods NAME READY STATUS RESTARTS AGE k8s-go-deployment-7cb5459755-4mf9t 1/1 Running 0 1m k8s-go-deployment-7cb5459755-5bljp 1/1 Running 0 1m k8s-go-deployment-7cb5459755-mbphr 1/1 Running 0 1m k8s-go-deployment-7cb5459755-t5qp6 1/1 Running 0 1m Access the 
service: minikube service k8s-go-service --url # Output: http://192.168.49.2:32657 Test the endpoint: curl http://192.168.49.2:32657 # Output: Hello, Welcome to Kubernetes world! Essential Operations Scaling # Scale horizontally kubectl scale deployment k8s-go-deployment --replicas=5 # Auto-scaling kubectl autoscale deployment k8s-go-deployment --cpu-percent=50 --min=3 --max=10 Updates and Rollbacks # Update image version kubectl set image deployment/k8s-go-deployment k8s-go=yinebeb/k8s-go:1.2.1 # Monitor rollout kubectl rollout status deployment/k8s-go-deployment # Rollback to previous version kubectl rollout undo deployment/k8s-go-deployment Debugging Techniques # Inspect pod events kubectl describe pod k8s-go-deployment-xxxxx # Follow logs in real-time kubectl logs -f k8s-go-deployment-xxxxx # Exec into container kubectl exec -it k8s-go-deployment-xxxxx -- /bin/sh Production-Ready Best Practices Configuration Management: Use ConfigMaps for environment variables. Store sensitive data in Kubernetes Secrets. Implement namespaces for environment separation. Set securityContext in PodSpec for enhanced security. Monitoring: # Install metrics server kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml # View resource usage kubectl top nodes Next Steps Expand your cluster with:\nIngress controllers for path-based routing. Persistent Volumes for stateful applications. Helm charts for package management. Prometheus and Grafana for monitoring. Conclusion In this tutorial, you’ve learned how to:\nContainerize a Go application using Docker. Deploy the application on a local Kubernetes cluster using Minikube. Implement essential Kubernetes operations like scaling, updates, and debugging. 
The full code is available on GitHub.\n","permalink":"https://yinebebt.com/post/k8s-go-app/","summary":"\u003cp\u003eWe\u0026rsquo;ll start with a minimal Go server, package it with Docker, spin up a local cluster using Minikube, and write the manifests to get it running.\u003c/p\u003e","title":"Learning Kubernetes: Deploying a Go Server from Scratch"},{"content":"Personal Integrity is the state of being whole and undivided, having internal unity and coherence. In other words, personal integrity refers to an unwavering commitment to moral and ethical principles that shape an individual\u0026rsquo;s actions and decisions. It is not just about big, life-altering decisions; it often shows up in the small choices we make daily.\nPersonal integrity can be demonstrated in various circumstances. Here are some key examples:\nHonesty in Conversations: Always telling the truth. Keeping Promises: Following through on commitments. Admitting Mistakes: Taking responsibility for errors. Standing Up for What is Right: Defending principles and fairness. Adherence to Laws and Rules: Respecting regulations and guidelines. Consistency in Behaviors: Being someone whose actions align predictably with their values. Avoiding Gossip: Speaking positively about others or remaining silent. Standing by Your Values: Staying true to what you believe, even when it is challenging. Generosity without recognition is a key principle, as highlighted by Oprah Winfrey: \u0026ldquo;Real integrity is doing the right thing knowing that nobody is going to know whether you did it or not.\u0026rdquo; - Oprah Winfrey\nIntegrity is a choice, a commitment, and a lifestyle. It has a ripple effect, influencing others positively. 
However, integrity also has risks; others may not always understand our actions, because integrity is an internal quality: people cannot see what we truly feel while acting.\nTo sum up, understanding societal moral codes, norms, spiritual beliefs, and national laws, and committing to the ones you deem true and right is the essence of true integrity.\nThanks for reading!\n","permalink":"https://yinebebt.com/post/personal-integrity/","summary":"\u003cp\u003e\u003cstrong\u003ePersonal Integrity\u003c/strong\u003e is the state of being whole and undivided, having internal unity and coherence. In other words, personal integrity refers to an unwavering commitment to moral and ethical principles that shape an individual\u0026rsquo;s actions and decisions. It is not just about big, life-altering decisions; it often shows up in the small choices we make daily.\u003c/p\u003e","title":"Personal Integrity: The Core of Internal Quality"},{"content":"Recently, I contributed to Tinode Chat1, an open-source chat platform, by adding a link preview feature2. This post highlights implementation details and lessons learned from collaborating with the Tinode community.\nThe Purpose of Link Previews Link previews enhance chat conversations by displaying a summary of the shared link, including the title, description, and an image. Instead of showing a bare URL, users can quickly grasp the content behind the link.\nFor example, a link to https://yahoo.com can be rendered as:\nExample link preview\nImplementation Overview The link preview feature was implemented in Go, as Tinode Chat\u0026rsquo;s backend is pure Go. Here’s a breakdown of the key components:\n1. Fetching URL Content The first step involves fetching the URL\u0026rsquo;s content using an HTTP GET request. URL validation is crucial to prevent requests to non-routable IPs, such as private or loopback addresses.\nreq, err := http.NewRequest(http.MethodGet, u, nil) resp, err := client.Do(req) 2. 
Extracting Metadata Once the content is retrieved, the HTML is parsed to extract metadata using Open Graph (OG) tags, such as:\nog:title og:description og:image If OG tags are unavailable, standard meta tags like \u0026lt;meta name=\u0026quot;description\u0026quot;\u0026gt; are used as a fallback.\nfunc extractMetadata(body io.Reader) *linkPreview { var preview linkPreview inTitleTag := false tokenizer := html.NewTokenizer(body) for { switch tokenizer.Next() { case html.ErrorToken: /* end of input or malformed HTML */ return \u0026amp;preview case html.StartTagToken: tagName, _ := tokenizer.TagName() switch atom.Lookup(tagName) { case atom.Meta: /* Extract meta tags */ case atom.Title: inTitleTag = true } case html.TextToken: if inTitleTag \u0026amp;\u0026amp; preview.Title == \u0026#34;\u0026#34; { preview.Title = tokenizer.Token().Data inTitleTag = false } } } } 3. Sanitizing Data To ensure a consistent and safe user experience, the extracted metadata is sanitized by limiting the length of the title, description, and image URL.\nfunc sanitizePreview(preview linkPreview) *linkPreview { if utf8.RuneCountInString(preview.Title) \u0026gt; 80 { preview.Title = string([]rune(preview.Title)[:80]) } return \u0026amp;linkPreview{ Title: strings.TrimSpace(preview.Title), Description: strings.TrimSpace(preview.Description), ImageURL: strings.TrimSpace(preview.ImageURL), } } For the complete implementation, check the linkpreview.go3 file.\nConclusion Feedback from the Tinode community helped refine the implementation and improve code quality. Handling edge cases, validating URLs, and limiting the response body size are all necessary for a service that fetches external content.\nContributing the link preview feature to Tinode Chat was a rewarding experience that highlighted the importance of secure coding practices, collaboration, and user-centric design. 
I look forward to making more contributions to open-source projects in the future.\nThanks for reading!\nTinode-Chat: https://github.com/tinode/chat\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nLink-preview: https://github.com/tinode/chat/issues/820\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nlinkpreview.go: https://github.com/tinode/chat/blob/devel/server/linkpreview.go\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"https://yinebebt.com/post/link-preview/","summary":"\u003cp\u003eRecently, I contributed to Tinode Chat\u003csup id=\"fnref:1\"\u003e\u003ca href=\"#fn:1\" class=\"footnote-ref\" role=\"doc-noteref\"\u003e1\u003c/a\u003e\u003c/sup\u003e, an open-source chat platform, by adding a link preview feature\u003csup id=\"fnref:2\"\u003e\u003ca href=\"#fn:2\" class=\"footnote-ref\" role=\"doc-noteref\"\u003e2\u003c/a\u003e\u003c/sup\u003e.\nThis post highlights implementation details and lessons learned from collaborating with the Tinode community.\u003c/p\u003e","title":"Link Preview Feature for Chat App"},{"content":"Whether you need a new service built from scratch, an existing system scaled, or infrastructure automated, I can help. 
Here\u0026rsquo;s what I offer.\n5+ Years Experience Top Rated Plus Upwork Status Backend Development API Development RESTful API design and implementation WebSocket for real-time features Database integration (PostgreSQL, Redis, MongoDB) Third-party service integration API documentation (OpenAPI/Swagger) Authentication \u0026amp; authorization (JWT, OAuth) System Architecture Application architecture for maintainability Database schema design and query optimization Microservices patterns Performance optimization and benchmarking Caching strategies with Redis Message queues and async processing DevOps \u0026amp; Deployment CI/CD pipeline setup (GitHub Actions, GitLab CI) Docker containerization Database migrations Monitoring and logging (Prometheus, Grafana) Cloud deployment (AWS, GCP) Infrastructure as Code (Terraform) How I Work Discovery — Discuss requirements and constraints Planning — Create technical plan and timeline Development — Iterative delivery with regular updates Testing — Comprehensive quality assurance Deployment — Smooth launch with documentation Support — Post-launch maintenance as needed Flexible engagement: hourly, project-based, or retainer. Timezone-friendly (UTC+3).\nTech Stack Primary Technologies:\nGo PostgreSQL Redis Docker Git Kubernetes AWS GCP GitHub Actions Terraform Frequently Asked Questions Q: What\u0026rsquo;s your availability? A: Available for new projects. I work full-time hours and can accommodate different timezones (UTC+3 base).\nQ: Do you work on existing codebases?\nA: Absolutely! 
I\u0026rsquo;m experienced in working with existing systems, refactoring legacy code, and adding new features to established applications.\nQ: What\u0026rsquo;s your preferred engagement model?\nA: I\u0026rsquo;m flexible and can work hourly, on a project basis, or through a monthly retainer depending on your needs.\nQ: Do you provide ongoing support?\nA: Yes, I offer maintenance and support packages for projects after initial development is complete.\nReady to Start? Have a project in mind? Let\u0026rsquo;s discuss your requirements.\nHire on Upwork Send an Email ","permalink":"https://yinebebt.com/services/","summary":"\u003cp\u003eWhether you need a new service built from scratch, an existing system scaled, or infrastructure automated, I can help. Here\u0026rsquo;s what I offer.\u003c/p\u003e\n\u003cdiv class=\"stats-bar\"\u003e\n  \u003cdiv class=\"stat-item\"\u003e\n    \u003cspan class=\"stat-number\"\u003e5+\u003c/span\u003e\n    \u003cspan class=\"stat-label\"\u003eYears Experience\u003c/span\u003e\n  \u003c/div\u003e\n  \u003cdiv class=\"stat-item\"\u003e\n    \u003cspan class=\"stat-number\"\u003eTop Rated Plus\u003c/span\u003e\n    \u003cspan class=\"stat-label\"\u003eUpwork Status\u003c/span\u003e\n  \u003c/div\u003e\n\u003c/div\u003e\n\u003chr\u003e\n\u003ch2 id=\"backend-development\"\u003eBackend Development\u003c/h2\u003e\n\u003ch3 id=\"api-development\"\u003eAPI Development\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eRESTful API design and implementation\u003c/li\u003e\n\u003cli\u003eWebSocket for real-time features\u003c/li\u003e\n\u003cli\u003eDatabase integration (PostgreSQL, Redis, MongoDB)\u003c/li\u003e\n\u003cli\u003eThird-party service integration\u003c/li\u003e\n\u003cli\u003eAPI documentation (OpenAPI/Swagger)\u003c/li\u003e\n\u003cli\u003eAuthentication \u0026amp; authorization (JWT, OAuth)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"system-architecture\"\u003eSystem 
Architecture\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eApplication architecture for maintainability\u003c/li\u003e\n\u003cli\u003eDatabase schema design and query optimization\u003c/li\u003e\n\u003cli\u003eMicroservices patterns\u003c/li\u003e\n\u003cli\u003ePerformance optimization and benchmarking\u003c/li\u003e\n\u003cli\u003eCaching strategies with Redis\u003c/li\u003e\n\u003cli\u003eMessage queues and async processing\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"devops--deployment\"\u003eDevOps \u0026amp; Deployment\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eCI/CD pipeline setup (GitHub Actions, GitLab CI)\u003c/li\u003e\n\u003cli\u003eDocker containerization\u003c/li\u003e\n\u003cli\u003eDatabase migrations\u003c/li\u003e\n\u003cli\u003eMonitoring and logging (Prometheus, Grafana)\u003c/li\u003e\n\u003cli\u003eCloud deployment (AWS, GCP)\u003c/li\u003e\n\u003cli\u003eInfrastructure as Code (Terraform)\u003c/li\u003e\n\u003c/ul\u003e\n\u003chr\u003e\n\u003ch2 id=\"how-i-work\"\u003eHow I Work\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eDiscovery\u003c/strong\u003e — Discuss requirements and constraints\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePlanning\u003c/strong\u003e — Create technical plan and timeline\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDevelopment\u003c/strong\u003e — Iterative delivery with regular updates\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eTesting\u003c/strong\u003e — Comprehensive quality assurance\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDeployment\u003c/strong\u003e — Smooth launch with documentation\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSupport\u003c/strong\u003e — Post-launch maintenance as needed\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eFlexible engagement: hourly, project-based, or retainer. 
Timezone-friendly (UTC+3).\u003c/p\u003e","title":"Backend Development Services - Go, Microservices, DevOps"},{"content":"This is useful when you need a specific version, want to test patches, or contribute to the Go project itself.\nWhy Install from Source? You need a version not yet available as a binary release You want to apply or test a patch before it ships You\u0026rsquo;re contributing to the Go compiler or standard library You want to understand the Go toolchain internals How It Works The Go team manages releases based on tags (versions). You can check out a specific version and install it from source.\nSince Go 1.5, the Go toolchain has been written in Go, meaning it compiles itself. If you already have Go installed, you can use it to compile a newer version, as long as the version gap is within two major releases.\nSteps Clone the source from go.googlesource.com/go (the canonical repository). The GitHub repository is a mirror, used primarily for contributions.\nCheck out the version tag you want:\ngit clone https://go.googlesource.com/go cd go git checkout go1.23.0 Build from the src directory: cd src ./all.bash This runs the build and all tests. 
Use ./make.bash if you want to skip tests.\nAdd to your PATH: export PATH=$HOME/go/bin:$PATH Key Points You need an existing Go installation (within two major versions) to bootstrap the build The build takes a few minutes depending on your hardware all.bash runs both the build and the full test suite For the full guide, see the official docs: go.dev/doc/install/source\n","permalink":"https://yinebebt.com/post/install-go-from-source/","summary":"\u003cp\u003eThis is useful when you need a specific version, want to test patches, or contribute to the Go project itself.\u003c/p\u003e","title":"Install Go from Source"},{"content":"If you\u0026rsquo;re already familiar with SSH\u0026rsquo;s purpose and importance in Git operations, let\u0026rsquo;s dive right in.\nScenario 1: Single User For beginners or those managing a single Git repository account, follow these steps:\nGenerate SSH Key: ssh-keygen -t ed25519 Add Public Key to Git Account:\nOpen the generated public key file, copy its content. Add the SSH key to your Git account\u0026rsquo;s settings. Test the Configuration:\nssh -T git@github.com Scenario 2: Multiple Accounts per User For users managing multiple projects across different Git accounts:\nGenerate SSH Key with Unique Name. Add Public Key to Respective Git Accounts. Configure Hostname and Identity File Mapping: Create or edit the ~/.ssh/config file. Add mappings for each account. Host x.github.com Hostname github.com PreferredAuthentications publickey IdentityFile ~/.ssh/id_account_x Update Remote Address in Git Configuration: Update the remote name in .git/config to match the new configuration. Scenario 3: Working with Multiple Remote Repositories If you need to have more than one remote repository for a project:\nAdd Additional Remote to Project: Create a new repository on your Git account. Use its SSH address to add a new remote with a distinct name. 
git remote add remote_name git@github.com:\u0026lt;user\u0026gt;/\u0026lt;project_name\u0026gt;.git Update Remote Hostname if necessary, modify the hostname in .git/config if needed. ","permalink":"https://yinebebt.com/post/configuring-ssh-for-git/","summary":"\u003cp\u003eIf you\u0026rsquo;re already familiar with SSH\u0026rsquo;s purpose and importance in Git operations, let\u0026rsquo;s dive right in.\u003c/p\u003e","title":"Configuring SSH for Git Authentication"},{"content":"Hexagonal Architecture, also known as Ports and Adapters Architecture, has gained traction for its focus on clean separation of concerns and improved maintainability. In this post I share what exactly happens within the hexagon. Let\u0026rsquo;s break down the key components and data flow.\nEntities These are the heart of your domain, representing core concepts with attributes and potentially behavior. They encapsulate data and enforce domain rules. Think of a Product entity with properties like name, price, and a method to calculate discountedPrice.\nRepository This acts as an abstraction layer for accessing and manipulating entities. It defines a port, offering methods like save, find, and delete that operate on entities, independent of the underlying storage mechanism (database, file system, etc.). A ProductRepository interface, for instance, would declare methods like saveProduct(product Product) and findProductById(id uuid).\nService The service layer orchestrates the business logic. It interacts with repositories to retrieve or store entities and implements the core application logic. Services should not depend on specific technologies for presentation or persistence. Imagine a ProductService with methods like createProduct(product Product) that validates data, interacts with the ProductRepository, and performs other business logic.\nController This is the entry point for user interactions. 
It receives requests from the frontend (web, API, etc.), maps incoming data to domain objects (entities), interacts with services to perform actions, and transforms service responses back into a format suitable for the frontend (JSON). A ProductController might receive a POST request with product details, convert it to a Product entity, call the ProductService to create it, and send a success message back to the frontend.\nData Flow The controller component receives and parses a user request, extracting relevant data to construct a domain object representing the user\u0026rsquo;s input. This domain object is then passed to the appropriate method within the service layer, where business logic related to the requested action is executed. The service interacts with the repository layer using defined methods, allowing the repository to handle the actual persistence logic, such as storing the data in a database using an Object-Relational Mapping (ORM) tool. Upon receiving confirmation or data from the repository, the service prepares a response based on the outcome of the operation. Finally, the controller sends this response back to the user interface, which may include a success message or details of the created entity.\nProject Structure The project keeps adapters and business logic separated so you can swap infrastructure without touching the core domain.\ncmd: Application entry point (main.go) and startup wiring. docs: Generated Swagger artifacts. internal/adapter: External-facing adapters. graphql: GraphQL handler. rest: REST handlers and middleware. repository: Database adapters (sqlite, postgres) plus shared repository wiring. templates: HTML templates and static CSS. internal/core: Domain and application business logic. entity: Domain entities. port: Interfaces for services and repositories. service: Use-case/business service implementations and tests. 
Current structure in the repository:\nhexagonal-architecture/ ├── cmd/ │ └── main.go ├── docs/ │ ├── docs.go │ ├── swagger.json │ └── swagger.yaml ├── internal/ │ ├── adapter/ │ │ ├── graphql/ │ │ │ └── handler.go │ │ ├── repository/ │ │ │ ├── postgres/ │ │ │ │ └── video.go │ │ │ ├── sqlite/ │ │ │ │ └── video.go │ │ │ └── repository.go │ │ ├── rest/ │ │ │ ├── handler.go │ │ │ └── middleware.go │ │ └── templates/ │ │ ├── css/ │ │ │ └── index.css │ │ ├── footer.html │ │ ├── header.html │ │ └── index.html │ └── core/ │ ├── entity/ │ │ └── video.go │ ├── port/ │ │ ├── repository.go │ │ └── service.go │ └── service/ │ ├── video.go │ └── video_test.go ├── go.mod └── README.md Summary By separating core logic from presentation and persistence, changes in one area have minimal impact on others, promoting maintainability and testability. The core application remains independent of the underlying technologies used for data storage or presentation. Each layer can be tested in isolation with mock implementations, leading to more reliable and efficient testing practices.\nHexagonal Architecture empowers us to build clean, maintainable, and scalable applications. By understanding the roles of each component and the data flow, you can leverage this powerful approach to streamline your development process.\nI built a starter project that demonstrates these concepts in action with Go. Explore the code on GitHub and the live project page on yinebebt.com.\n","permalink":"https://yinebebt.com/post/hexagonal-architecture/","summary":"\u003cp\u003eHexagonal Architecture, also known as Ports and Adapters Architecture, has gained traction for its focus on clean separation of concerns and improved maintainability. In this post I share what exactly happens within the hexagon. 
Let\u0026rsquo;s break down the key components and data flow.\u003c/p\u003e","title":"Hexagonal Architecture in Go"},{"content":"Each time I powered on my laptop, I found myself going through the same motions — launching Visual Studio Code, IntelliJ, and starting up the required services in the terminal.\nToday, I will show a simple yet powerful fix with a script.\nA script is a text file containing a sequence of commands for a UNIX-based operating system.\nAs a Linux user, you might be familiar with using the terminal for various operations, but did you know that you can streamline your workflow further by creating desktop shortcuts? Desktop shortcuts provide a convenient way to access frequently used applications, scripts, or commands with a single click. They eliminate the need to remember and type complex commands every time you want to perform a specific task.\nWe can leverage .desktop files, which are text-based configuration files. These files contain metadata about the application, including its name, command to execute, icon, and categories.\nFor instance, let\u0026rsquo;s create a desktop shortcut to start a local development server.\nScript file\n#!/bin/bash\nsudo service nginx stop\nsudo service apache2 restart\ngnome-terminal --working-directory=/home/yinebeb/path/to/myapp/\ncode /home/yinebeb/path/to/myapp/\nDesktop entry file\n[Desktop Entry]\nName=eTech-erp\nComment=stop Nginx, restart Apache2, open Terminal, Code and GoLand\nExec=/bin/bash /home/yinebeb/path/to/myapp.sh\nIcon=/home/yinebeb/path/to/logo.png\nTerminal=false\nType=Application\nCategories=Utility;Development;\nSave the script file with a .sh extension as in myapp.sh and the desktop entry with .desktop as in myapp.desktop. 
Move the desktop entry file to ~/.local/share/applications.\nNow make these files executable:\nchmod +x myapp.sh\nchmod +x myapp.desktop\nAfter this, you can access your app on the start menu — just press the super key, type your app name (e.g., \u0026ldquo;eTech\u0026rdquo;), and launch it.\nConclusion By creating shortcuts to commonly used applications and scripts, you can save time, boost productivity, and maintain a well-organized work environment. If you find yourself bogged down by repetitive tasks, consider creating a simple script to automate your routine. Embrace the power of automation and set yourself free to explore new horizons in your daily work life.\n","permalink":"https://yinebebt.com/post/desktop-shortcuts-linux/","summary":"\u003cp\u003eEach time I powered on my laptop, I found myself going through the same motions — launching Visual Studio Code, IntelliJ, and starting up the required services in the terminal.\u003c/p\u003e","title":"Enhance Your Workflow with Desktop Shortcuts on Linux"},{"content":"In the last semester of my B.Sc. degree, I worked on a final year project titled \u0026ldquo;Computer Vision Based Authentication and Access Control System\u0026rdquo; with my team members Yosef Emyayu and Getachew Getu, under the guidance of our advisor.\nKeywords: Computer Vision (CV), Face Recognition, OpenCV, and OTP\nOverview The project builds a Computer Vision based authentication and access control system. The goal: use computer vision for authentication and access control — screening users by their face for access to a campus or specific resources like a data center. 
Facial features are unique, making this more secure than password-based authentication.\nComputer vision (CV) is a field of computer science that deals with replicating the complex parts of the human visual system and enabling machines to comprehend and understand the visual details present in data.\nWe used SQLite (via Python\u0026rsquo;s sqlite3 module) for user information management (ID number, full name, department) and deployed a Raspberry Pi as the main controller. An interactive GUI built with Python\u0026rsquo;s tkinter allows admins to register users, train the model, and manage the system.\nFace Recognition Approach We used the LBPH (Local Binary Patterns Histograms) face recognition algorithm, which gives more accurate results than the Fisherfaces and Eigenfaces algorithms. For systems with enough computing power, deep learning methods such as FaceNet could provide better accuracy.\nThe process:\nCapture 50 images of a person from different angles\nConvert the color images to grayscale\nTrain the model and store the encoded data in the database\nCompare each pixel with its 3x3 neighborhood to compute the local binary patterns used for LBPH recognition\nThe phases break down into pre-processing, feature extraction, and classification.\nTwo-Factor Authentication Since CV-based face recognition is challenged by low light intensity, we developed a two-factor authentication fallback. When lighting conditions are poor (fog, night), the system prompts for password authentication, then generates a time-bounded one-time password (OTP) sent via email using SMTP.\nSystem Modes User mode: Registered users authenticated via face recognition. On successful match, the servo motor opens the door and attendance is automatically recorded.\nGuest mode: Unregistered visitors can request access via email notification to the admin. 
Hardware Components\nRaspberry Pi 3 (Model B V1.2)\nPi Camera v2.1\nPresence detection sensor\nLCD display for user communication\nKeypad for input\nServo motor for door control\nThe servo motor operates from 45 degrees (closed) to 135 degrees (open) to control door movement.\nImplementation The system has two phases:\nRegistration and Training: Admin registers user information (ID, name, gender, email), captures face images via Pi Camera, and trains the LBPH model. Training generates a .yml encoded file used for recognition.\nRecognition: Live camera feed is compared against the trained model. On match, the system displays the person\u0026rsquo;s name, opens the door via servo motor, and records attendance with timestamp. Unknown faces trigger the guest mode flow.\nProject Code Structure\nheadshot.py # Camera capture and image storage\nrecognition.py # Face recognition and matching\ntrain_model.py # LBPH model training\ndatabase.py # User data management\notp.py # One-time password generation\nguest.py # Guest mode handling\nadmin.py # Admin GUI and operations\nmain.py # Application entry point\nFuture Work\nIntegrate a remote server for centralized data management\nSynchronize data between admin interface and Raspberry Pi\nOffload image processing for improved efficiency\nThe complete project is available on GitHub.\n","permalink":"https://yinebebt.com/post/cv-auth-monitoring/","summary":"\u003cp\u003eIn the last semester of my B.Sc. degree, I worked on a final year project titled \u0026ldquo;Computer Vision Based Authentication and Access Control System\u0026rdquo; with my team members Yosef Emyayu and Getachew Getu, under the guidance of our advisor.\u003c/p\u003e","title":"Computer Vision Based Authentication and Access Control System"}]