Fresh off launching the low-cost MacBook Neo, Apple is reportedly preparing at least three new products that will fit into its highest-end "ultra" lineup. According to Bloomberg's Mark Gurman, the next batch of releases may not bear the "ultra" name, as the Watch does, but will all command price premiums over their mainline counterparts. There's the […]
VCs are betting that artificial intelligence will disrupt nearly every industry in the world. Are they prepared for it to disrupt their own?
Every MCP server injects its full tool schemas into context on every turn — a 30-tool server costs ~3,600 tokens per turn whether the model uses the tools or not. Over 25 turns with 120 tools, that's 362,000 tokens spent on schemas alone.
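The arithmetic behind those figures, as a quick sketch (the ~120 tokens-per-schema figure is an assumption inferred from the per-tool estimates quoted below, so the second total lands near, not exactly at, the post's measured 362,000):

```python
# Rough schema-overhead arithmetic. TOKENS_PER_SCHEMA is an assumed
# average per-tool schema size, not a number measured by mcp2cli itself.
TOKENS_PER_SCHEMA = 120

per_turn_30 = 30 * TOKENS_PER_SCHEMA      # 30 tools injected every turn
total_120 = 120 * TOKENS_PER_SCHEMA * 25  # 120 tools over 25 turns

print(per_turn_30)  # 3600 — matches the ~3,600 tokens/turn figure
print(total_120)    # 360000 — close to the quoted 362,000
```

The overhead scales linearly in both tool count and turn count, which is why large tool catalogs dominate context so quickly.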
mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:
```
mcp2cli --mcp https://mcp.example.com/sse --list                        # ~16 tokens/tool
mcp2cli --mcp https://mcp.example.com/sse create-task --help            # ~120 tokens, once
mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
```
No codegen, no rebuild when the server changes. Works with any LLM — it's just a CLI the model shells out to. Also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface.
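Because the interface is just a process invocation, any agent runtime can drive it with a plain subprocess call. A minimal Python sketch, assuming `mcp2cli` is on PATH and reusing the placeholder server URL from the commands above:

```python
import subprocess


def mcp2cli_argv(server: str, *args: str) -> list[str]:
    """Build the argv for one mcp2cli invocation against an MCP server."""
    return ["mcp2cli", "--mcp", server, *args]


def run_tool(server: str, *args: str) -> str:
    """Shell out to mcp2cli and return its stdout; raises on nonzero exit."""
    result = subprocess.run(
        mcp2cli_argv(server, *args),
        capture_output=True,
        text=True,
        timeout=60,
    )
    result.check_returncode()
    return result.stdout


# Incremental discovery, mirroring the commands above:
#   run_tool("https://mcp.example.com/sse", "--list")
#   run_tool("https://mcp.example.com/sse", "create-task", "--help")
#   run_tool("https://mcp.example.com/sse", "create-task", "--title", "Fix bug")
```

The agent only pays for `--help` output on tools it actually decides to call, which is where the per-turn schema overhead disappears.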
The token savings are real, measured with the cl100k_base tokenizer: 96% for 30 tools over 15 turns, and 99% for 120 tools over 25 turns.
It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`
Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.
https://github.com/knowsuchagency/mcp2cli
Comments URL: https://news.ycombinator.com/item?id=47305149
Points: 56
# Comments: 29
The facial recognition question is where things get more tangled.
What are you working on? Any new ideas that you're thinking about?
Comments URL: https://news.ycombinator.com/item?id=47303111
Points: 139
# Comments: 481
The company launched its first product, a Game Boy-style handheld device called the Chromatic, in 2024.