The project now includes optional extras: additional providers (including Gemini), extended documentation, small security tools, and a tiny test suite. The core stays minimal and portable; extras are opt‑in.
I’d love to get:
- feedback on the design and the Bash choices
- visibility, to see whether others find this useful
- testing on different environments (Linux distros, macOS, WSL, Termux)
Repo: https://github.com/kamaludu/groqbash
Note: I’m not a native English speaker. I read English fairly well, but I usually rely on automatic translators (and sometimes GroqBash itself) when writing. Happy to clarify anything if needed.
In my day-to-day work, our backend is built on top of Echo. Echo is fast and reliable as an HTTP transport, but it is deliberately unopinionated, leaving architectural decisions almost entirely to individual developers. Over time, this led to a system where execution flow and responsibility boundaries varied depending on who last touched a feature. Maintenance became difficult not because the code was incorrect, but because how requests actually executed was no longer obvious.
I looked for a Go framework that could provide a clear execution model and structural constraints, similar to what Spring or NestJS offer. I couldn’t find one that fit. Moving to Spring or NestJS would also mean giving up some of Go’s strengths—simplicity, performance, and explicit control—so I decided to build one instead.
Spine is an execution-centric backend framework for Go. It aims to provide enterprise-grade structure while deliberately avoiding hidden magic.
What Spine provides
• An IoC container with explicit, constructor-based dependency injection
• Interceptors with well-defined execution phases (before, after, completion)
• First-class support for both HTTP requests and event-driven execution
• No annotations, no implicit behavior, no convention-driven wiring
The core idea: execution first
The key difference is Spine’s execution model.
Every request—HTTP or event—flows through a single, explicit Pipeline. The Pipeline is the only component that determines execution order. Actual method calls are handled by a separate Invoker, keeping execution control and invocation strictly separated.
Because of this structure:
• Execution order is explainable by reading the code
• Cross-cutting concerns live in the execution flow, not inside controllers
• Controllers express use cases only, not orchestration logic
• You can understand request handling by looking at main.go
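To make the separation concrete, here is a minimal, hypothetical sketch of the idea (shown in Python for brevity; the names `Pipeline` and `Invoker` mirror the post, but this is not Spine's actual API):

```python
# Hypothetical sketch, not Spine's actual API: the pipeline alone decides
# execution order, while a separate invoker performs the actual call.

class Invoker:
    """Only knows how to call a handler; contains no ordering logic."""
    def invoke(self, handler, request):
        return handler(request)

class Pipeline:
    """The single place where execution order is defined."""
    def __init__(self, interceptors, invoker):
        self.interceptors = interceptors  # run in list order, explicitly
        self.invoker = invoker

    def execute(self, handler, request):
        # "before" phase, in declared order
        for i in self.interceptors:
            i.before(request)
        try:
            response = self.invoker.invoke(handler, request)
            # "after" phase, in reverse order
            for i in reversed(self.interceptors):
                i.after(request, response)
            return response
        finally:
            # "completion" phase always runs, even on error
            for i in reversed(self.interceptors):
                i.completion(request)
```

Because the interceptor list is an explicit constructor argument, the execution order is visible at the construction site, which is exactly the property the post is after.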
This design trades some convenience for clarity. In return, it offers stronger control as the system grows in size and complexity.
My goal with Spine isn’t just to add another framework to the Go ecosystem, but to start a conversation: How much execution flow do modern web frameworks hide, and when does that become a maintenance cost?
The framework's documentation is currently written in Korean. If English support or internationalization matters to you, feel free to open an issue; I plan to prioritize it based on community interest.
You can find more details, a basic HTTP example, and a simple Kafka-based MSA demo in the repository: https://github.com/NARUBROWN/spine
Thanks for reading. I’d really appreciate your feedback.
This means you can reuse the same Python functions for server workflows or client-side experiments without any code changes.
⸻
Why it matters
• Plug-and-play: one Python function, multiple deployment options.
• Instant testing: run your tools locally in the browser or via HTTP.
• Enterprise-ready: leverage internal libraries, scripts, and APIs immediately.
• Unified interface: MCP agents call your tools the same way, regardless of server or WASM.
⸻
Example
```python
# Python function
def calculate_stats(numbers):
    """Return basic statistics for a list of numbers."""
    return {
        "count": len(numbers),
        "sum": sum(numbers),
        "mean": sum(numbers) / len(numbers),
    }
```
```python
# === WebAssembly MCP ===
from polymcp import expose_tools_wasm

compiler = expose_tools_wasm([calculate_stats])
compiler.compile("./wasm_output")
```
The same function can also be exposed via HTTP MCP endpoints without modification.
```python
# === HTTP MCP ===
from polymcp.polymcp_toolkit import expose_tools

app = expose_tools([calculate_stats], title="Stats Tools")
# Run with: uvicorn server_mcp:app --reload
```
⸻
Repo: https://github.com/poly-mcp/Polymcp
We’d love to hear how you’re using PolyMCP, and what features you’d like to see in future releases.
The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just “type, wait, reply”.
Beni is basically:
A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
Real-time voice conversation (streaming response, not “wait 10 seconds then speak”)
Long-term memory so the character can keep context across sessions
The hardest part wasn’t generating text; it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up, or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts.
Some implementation details (happy to share more if anyone’s curious):
Browser-based real-time calling, with audio streaming and client-side playback control
Live2D rendering on the front end, with animation hooks tied to speech / state
A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity
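As a rough illustration of the synchronization problem (hypothetical names, not Beni’s actual code): one way to keep lips and sound aligned is to timestamp every animation cue against the audio clock, buffer cues alongside audio chunks, and only release a cue once playback has actually reached it, so network jitter delays both together instead of desynchronizing them.

```python
# Hypothetical sketch, not Beni's actual code: pair streamed TTS audio with
# timestamped animation cues, and release each cue only once the audio clock
# (seconds of audio actually played) has reached its timestamp.
import heapq

class SyncedPlayback:
    def __init__(self):
        self.audio_clock = 0.0   # seconds of audio actually played so far
        self.pending_cues = []   # min-heap of (timestamp, cue_name)

    def schedule_cue(self, timestamp, cue):
        """Queue an animation cue to fire at an absolute audio timestamp."""
        heapq.heappush(self.pending_cues, (timestamp, cue))

    def on_audio_played(self, duration):
        """Advance the audio clock by a played chunk; return cues now due."""
        self.audio_clock += duration
        due = []
        while self.pending_cues and self.pending_cues[0][0] <= self.audio_clock:
            due.append(heapq.heappop(self.pending_cues)[1])
        return due
```

Driving the Live2D hooks from the playback clock rather than from arrival time is what keeps a late-arriving chunk from firing its mouth shapes early.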
Current limitation: sign-in is required today (to persist memory and prevent abuse). I’m adding a guest mode soon so it’s quicker to try out, and I’m working on the mobile view now.
What I’d love feedback on:
Does the “real-time call” loop feel responsive enough, or still too laggy?
Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?
Thanks, and I’ll be around in the comments.