iamben
Email today:

Since 2005, 1Password has been on a mission to make security simple, reliable, and accessible for everyone. As the way people work and live online has evolved, so has 1Password.

More recently, we’ve invested significantly in new features that make 1Password even more powerful and effortless to use, helping protect what matters most to you, including:

- Automatic saving of logins and payment details
- Enhanced Watchtower alerts
- Faster, more secure device setup
- AI-powered item naming
- Expanded recovery options
- Proactive phishing prevention

While 1Password has grown substantially in value and capability, our pricing has remained largely unchanged for many years. To continue investing in innovation and the world-class security you expect, we’re updating pricing for Individual plans, starting March 27, 2026.

Current vs New Pricing:

- Current price: $35.88 USD / year
- New price: $47.88 USD / year

The new price will take effect at your next renewal, provided it’s on or after March 27, 2026. Renewals occurring prior to March 27, 2026, will continue at the current pricing until your next renewal.

If you have any questions, please reach out to support by replying to this email. We’re deeply grateful for your continued trust and support.

Thank you,
The 1Password Team
TheAlgorist
Apologies for the odd title; character limits.

I manage my tasks with Taskwarrior and it's been incredibly productive for me. What it does, it does very well. But there's a lot it doesn't do, and that's the problem I'm facing.

I've realized I need proper project documentation and management features, but I don't want to replace Taskwarrior. Instead, I'm looking to *complement* it with some type of knowledge base that also has project management features (or vice versa). My ultimate goal is to integrate these systems together via automations.

In short, Taskwarrior is lacking when it comes to project documentation.

*My criteria:*

- Must be open-source
- MUST work in the browser (so no mention of Obsidian)
- Has basic project management features (interpret as you will)
- Rich wiki-like document interface (bidirectional links, nice editing UI, etc.)
- Supports iframes (to embed my Taskwarrior views or tables)
- Has an API for integration
- Not too heavy; I am not a business, just a guy

*Tools I've been looking at:* Odoo, Silverbullet, Blinko, Logseq, AFFiNE, Docmost, Trilium, Joplin, Dolibarr, Leantime, OpenProject, Wiki.js, etc.

*Rejected (either not web-based or too restrictive with paid features):* AppFlowy, Logseq (local-first), Capacities, Obsidian, Anytype

Does anyone know if a tool like this exists? I feel like I'm looking for a sweet spot between a wiki and a project management tool, but the choices are overwhelming :'(
Kapura
I would love examples of positions and industries where AI has been revolutionary. I have a friend at one of the largest consulting firms who says it's been a game-changer for processing huge amounts of documentation in a short period of time. Whether that produces better results is another question, but I would love to hear more stories of AI actually making things better.
techteach00
Hi, I'm a K-8 technology teacher in NYC. My students are in desperate need of new hardware. The Chromebooks they use now are so slow that they make the children agitated when using them.

I'm aware of the various grant opportunities that exist; I just thought it was worth inquiring here for a potentially faster route to acquiring new hardware.

Thank you for listening.
dmpyatyi
YC recently put out a video about the agent economy: the idea that agents are becoming autonomous economic actors, choosing tools and services without human input.

It got me thinking: how do you actually optimize for agent discovery? With humans you can do SEO, copywriting, word of mouth. But an agent just looks at the available tools in context and picks one based on the description, schema, and examples.

Has anyone experimented with this? Does better documentation measurably increase how often agents call your tool? Does the wording of your tool description matter across different models (GLM vs. Claude vs. Gemini)?
JB_5000
Serious question.

Outside government or heavily regulated enterprise, what is Microsoft’s core value prop in 2026?

It feels like a lot of adoption is inherited — contracts, compliance, enterprise trust, existing org gravity. Not necessarily technical preference.

If you were starting from scratch today with no legacy, no E5 contracts, no sunk cost — how many teams would actually choose the full MS stack over best-of-breed tools?

Curious what people here have actually chosen in greenfield builds.
personality0
I'm looking to replace my Alexa with an alternative where I can use a realtime model like Gemini, or an STT -> LLM -> TTS pipeline. It should be easy to build with an Arduino, or I'd even be happy buying an already-made solution.

Basic functions should include playing Spotify, asking questions, and setting timers.
thesssaism
We took a vague 2-sentence client request for a "Team Productivity Dashboard" and ran it through two different discovery processes: a traditional human-analyst approach vs. an AI-driven interrogation workflow.

The results were uncomfortable. The human produced a polite paragraph summarizing the "happy path." The AI produced a 127-point technical specification that highlighted every edge case, security flaw, and missing feature we usually forget until Week 8.

Here is the breakdown of the experiment, and why I think "scope creep" is mostly just discovery failure.

The Problem: The "Assumption Blind Spot"

We’ve all lived through the "Week 8 Crisis." You’re 75% through a 12-week build, and suddenly the client asks, "Where is the admin panel to manage users?" The dev team assumed it was out of scope; the client assumed it was implied because "all apps have logins."

Humans have high context. When we hear "dashboard," we assume standard auth, standard errors, and standard scale. We don't write it down because it feels pedantic.

AI has zero context. It doesn't know that "auth" is implied. It doesn't know that we don't care about rate limiting for a prototype. So it asks.

The Experiment

We fed the same input to a senior human analyst and an LLM workflow acting as a technical interrogator.

Input: "We need a dashboard to track team productivity. It should pull data from Jira and GitHub and show us who is blocking who."

Path A: Human Analyst
- Output: ~5 bullet points, focused on the UI and the "business value."
- Assumed: standard Jira/GitHub APIs, single tenant, standard security.
- Result: a clean, readable, but technically hollow summary.

Path B: AI Interrogator
- Output: 127 distinct technical requirements, focused on failure states, data governance, and edge cases.
- Result: a massive, boring, but exhaustive document.

The Results

The volume difference (5 vs. 127) is striking, but the content difference is what matters.
The AI explicitly defined requirements that the human completely "blind spotted":

- Granular RBAC: "What happens if a junior dev tries to delete a repo link?"
- API rate limits: "How do we handle 429 errors from GitHub during a sync?"
- Data retention: "Do we store the Jira tickets indefinitely? Is there a purge policy?"
- Empty states: "What does the dashboard look like for a new user with 0 tickets?"

The human spec implied these were "implementation details." The AI treated them as requirements. In my experience, treating RBAC as an implementation detail is exactly why projects go over budget.

Trade-offs and Limitations

To be fair, reading a 127-point spec is miserable. There is a serious signal-to-noise problem here.

- Bloat: The AI can be overly rigid. It suggested a microservices architecture for what should be a monolith. It hallucinated complexity where none existed.
- Paralysis: Handing a developer a 127-point list for a prototype is a great way to kill morale.
- Filtering: You still need a human to look at the list and say, "We don't need multi-tenancy yet; delete points 45-60."

However, I'd rather delete 20 unnecessary points at the start of a project than discover 20 missing requirements two weeks before launch.

Discussion

This experiment made me realize that our hatred of writing specs—and our reliance on "implied" context—is a major source of technical debt. The AI is useful not because it's smart, but because it's pedantic enough to ask the questions we think are too obvious to ask.

I’m curious how others handle this "implied requirements" problem:

1. Do you have a checklist for things like RBAC/auth/rate limits that you reuse?
2. Is a 100+ point spec actually helpful, or does it just front-load the arguments?
3. How do you filter the "AI noise" from the critical missing specs?

If anyone wants to see the specific prompts we used to trigger this "interrogator" mode, happy to share in the comments.
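One of the items the AI flagged, handling 429s from GitHub during a sync, is concrete enough to sketch. This is a minimal retry-with-exponential-backoff loop, not the workflow's actual code; `RateLimited` is a hypothetical wrapper exception standing in for whatever your HTTP client raises on a 429:

```python
import time

class RateLimited(Exception):
    """Hypothetical stand-in for an HTTP 429 raised by the API client."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def fetch_with_backoff(call, max_retries=5, base_delay=1.0):
    """Run `call`, retrying on rate limits with exponential backoff.

    Honors the server's Retry-After hint when present; otherwise
    doubles the wait on each attempt."""
    delay = base_delay
    for _ in range(max_retries):
        try:
            return call()
        except RateLimited as e:
            time.sleep(e.retry_after if e.retry_after is not None else delay)
            delay *= 2
    raise RuntimeError("rate limit retries exhausted")
```

The point of the spec item is that this behavior has to be decided somewhere; silently dropping a sync page on a 429 is exactly the kind of failure state the human spec never mentioned.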
Cyberis
I have users who glaze over the minute I mention "notepad." I think they can barely use Windows. But our work requires a level of privacy (regulatory and otherwise), and Windows 11 is just one big data transmitter. I know this is flamebait, but I'd love suggestions for a Linux desktop that looks like Windows, is stable and easy to administer and harden, and works with the Dell business-grade laptops that we bought new in 2025.
a_protsyuk
I have 2,600+ notes in Apple Notes and can barely find anything.

My kid just dumps everything into Telegram saved messages. I'm doing some informal research: curious what systems people actually use (not aspire to use).

Do you have a setup that works, or is everything scattered across 5 apps like mine?
thesvp
We're building AI agents that take real actions — refunds, database writes, API calls.

Prompt instructions like "never do X" don't hold up. LLMs ignore them when context is long or users push hard.

Curious how others are handling this:

- Hard-coded checks before every action?
- Some middleware layer?
- Just hoping for the best?

We built a control layer for this — different methods for structured data, unstructured outputs, and guardrails (https://limits.dev). Genuinely want to learn how others approach it.
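For what it's worth, the first option (hard-coded checks before every action) can be as small as a dispatch table of deterministic predicates. A sketch, with made-up action names and limits that are illustrative, not from limits.dev:

```python
# Deterministic guards that run before any agent action executes.
# Action types and thresholds here are purely illustrative.
GUARDS = {
    "refund":   lambda a: a.get("amount", 0) <= 100,  # cap refunds at $100
    "db_write": lambda a: a.get("table") != "users",  # never touch the users table
}

def execute(action: dict, handler):
    """Run `handler` only if the action type has a guard and the guard passes.

    Unknown action types are blocked by default (fail closed)."""
    guard = GUARDS.get(action["type"])
    if guard is None or not guard(action):
        return {"status": "blocked", "action": action["type"]}
    return handler(action)
```

The key property is that the model never sees or negotiates these checks; they sit between the model's proposed action and the side effect, so long context or pushy users can't talk them away.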
codexon
https://www.erdosproblems.com/forum/thread/783

> Ah, GPT is right, there is a fatal sign error in the way I tried to handle small primes. There were no obvious fixes, so I ended up going back to Hildebrand's paper to see how he handled small primes, and it turned out that he could do it using a neat inequality ρ(u1)ρ(u2) ≥ ρ(u1u2) for the Dickman function (a consequence of the log-concavity of this function). Using this, and implementing the previous simplifications, I now have a repaired argument.

(Terence Tao)
parvardegr
Please share your real-world experiences. What is a bad one, and why?
rakan1
Curious.
dakiol
Hi. I'm looking for a programmable watch with wifi. Ideally I should be able to write custom programs/apps for the watch to display whatever I want on it (e.g., have the watch make an HTTPS call to a server, receive JSON, and render it accordingly; allow the watch to receive "notifications" from the server).

Also, ideally, no requirement of a smartphone to send and receive data (it's OK to need a smartphone for the initial setup of the watch, though). I know about Pebble, but it doesn't have wifi. I know about some Garmins with wifi, but for the kind of apps I want to write, the communication between the watch and the server has to be mediated by a phone. Also, correct me if I'm wrong, but I don't want to pay $100/year just to be able to run my custom app on an Apple Watch. I usually don't trust Google either (e.g., they discontinue everything in the blink of an eye).

So, what are my options?
marginalia_nu
I've noticed a fairly sharp increase in junk comments lately. Often new accounts, making posts that are very low quality or sometimes completely incoherent.

I see glitch comments like this on a fairly regular basis:

> 13 60 well and t6ctctfuvuh7hguhuig8h88gd to f6gug7h8j8h6fzbuvubt GB I be cugttc fav uhz cb ibub8vgxgvzdrc to bubuvtxfh tf d xxx h z j gj uxomoxtububonjbk P.l.kvh cb hug tf 6 go k7gtcv8j9j7gimpiiuh7i 8ubg

https://news.ycombinator.com/item?id=47068948#47117224

or this:

> 1662476506

https://news.ycombinator.com/item?id=47121737

or this:

> Аё

https://news.ycombinator.com/item?id=47126475

Sometimes it's coherent, but completely off topic, like this:

> when is fivetran coming?

https://news.ycombinator.com/item?id=47130567

Is clawd running amok, or is someone running botnet C&C via https://news.ycombinator.com/noobcomments, or what gives?
helloplanets
sujayk_33
I was just wondering why there's a constant timeline and no recommendations.
7777777phil
The first-order effects of GLP-1 drugs are obvious: people lose weight, Novo Nordisk and Eli Lilly print money. But what happens when 10-15% of the adult population is on weight-loss medication within a decade? The downstream consequences are less discussed and almost certainly not priced into anything.

In 2018, United Airlines switched to lighter paper for its inflight magazine. One ounce per copy. Across 4,500 daily flights, that saved 170,000 gallons of fuel a year [1]. Airlines think about weight at this level of granularity because fuel is their single largest variable cost.

Average weight loss on semaglutide is around 35 pounds per person. If 12% of passengers on a typical 737 have been on the drug, that's roughly 750 fewer pounds per flight, the equivalent of shaving the weight off 12,000 magazines. United spent months optimizing paper stock to save $290,000 a year in fuel. GLP-1 adoption across the flying population could quietly save them an order of magnitude more, and ticket prices don't adjust down when passengers get lighter.

The food supply chain is more obvious but larger in scale. If a big share of the population eats 20-30% less, demand for calories drops. Not a shift in preferences toward salads. A pharmacological reduction in how much people eat, period. The food industry has dealt with changing tastes before. It has never faced a demand shock from the medical system.

Health insurance has a subtler problem. The pitch for GLP-1 coverage is that the drugs prevent expensive conditions downstream: diabetes, heart disease, joint replacements. Probably true. But in America's fragmented insurance market, the company paying for the drug today probably isn't the one insuring that patient in five or ten years. The savings land on someone else's balance sheet. That mismatch could slow adoption by years on its own.

Obesity correlates with lower workforce participation and higher absenteeism.
If GLP-1s meaningfully reduce obesity rates, aggregate labor supply goes up. More people working, fewer health-related absences. That's a macroeconomic stimulus, except nobody frames it that way because it comes from a pharmaceutical company rather than from Congress.

Early data suggests GLP-1s reduce cravings for alcohol, nicotine, and gambling too. Phase 2 trials for opioid use disorder are underway. A weight-loss drug that accidentally dents Diageo's revenue and casino foot traffic was not in anybody's original investment thesis for Ozempic.

The effect I find hardest to think about is the psychological one. Weight has been tangled up with shame, identity, and social hierarchy for centuries. What happens to body positivity, the social dynamics of attractiveness, the entire cultural machinery around diet and discipline when weight becomes something you manage with a prescription? I don't have a good framework for it. Nothing comparable has happened before.

The market is treating this as a pharma story. The drug companies will capture a fraction of the total value created and destroyed. The rest redistributes across food, airlines, insurance, labor markets, and social behavior. Nobody's model probably covers all of that at once.

[1] https://www.cbsnews.com/news/united-hemispheres-magazine-print-edition/

EDIT: Formatting
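The 737 figure holds up as rough arithmetic, assuming a ~175-seat single-class layout (the seat count is my assumption; the adoption rate and average loss are from the post):

```python
# Back-of-envelope check of the cabin-weight claim.
seats = 175          # typical 737 single-class capacity (assumption)
adoption = 0.12      # share of passengers on a GLP-1, per the post
avg_loss_lb = 35     # average semaglutide weight loss, per the post

saved_per_flight_lb = seats * adoption * avg_loss_lb
print(round(saved_per_flight_lb))  # 735, close to the ~750 lb cited
```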
marvin_nora
I spent two weeks running AI agents autonomously (trading, writing, managing projects) and documented the 5 failure modes that actually bit me:<p>1. Auto-rotation: Unsupervised cron job destroyed $24.88 in 2 days. No P&L guards, no human review.<p>2. Documentation trap: Agent produced 500KB of docs instead of executing. Writing about doing > doing.<p>3. Market efficiency: Scanned 1,000 markets looking for edge. Found zero. The market already knew everything I knew.<p>4. Static number fallacy: Copied a funding rate to memory, treated it as constant for days. Reality moved; my number didn't.<p>5. Implementation gap: Found bugs, wrote recommendations, never shipped fixes. Each session re-discovered the same bugs.<p>Built an open-source funding rate scanner as fallout: https://github.com/marvin-playground/hl-funding-scanner<p>Full writeup: https://nora.institute/blog/ai-agents-unsupervised-failures.html<p>Curious what failure modes others have hit running agents without supervision.
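Failure mode 1 is the cheapest one to prevent. A P&L guard can be a few lines sitting between the strategy and the exchange; this is an illustrative sketch, not code from the scanner repo:

```python
class PnLGuard:
    """Refuse further trades once cumulative realized loss crosses a limit."""

    def __init__(self, max_loss: float):
        self.max_loss = max_loss
        self.realized = 0.0

    def record(self, pnl: float) -> None:
        # Called after each fill with the realized profit (negative = loss).
        self.realized += pnl

    def allow_trade(self) -> bool:
        # The agent's trade loop checks this before placing any order.
        return self.realized > -self.max_loss
```

The cron job that lost $24.88 would have stopped itself at whatever limit this was configured with, no human review required.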
arm32
Seems like everybody is just carelessly saying—whatever—to Claude. Client lists, trade secrets. We all know that our agents haven’t signed NDAs, right? Right?
shaheeniquebal
Hi HN,

I’m researching how early-stage AI and health-tech startups think about protecting their innovations.

Traditional patents are expensive, slow, and often misaligned with how fast AI products evolve. I’m curious:

Are founders filing patents early? Are you relying on trade secrets? Publishing defensively? Not worrying about IP at all? Waiting until revenue?

We’re collecting responses through a short 60-second survey to better understand real-world behavior:

https://forms.gle/8UAytkGNfge4GKrH8

If you’d rather just comment here, that’s equally helpful.

I’m happy to share aggregated insights back with the community.

Thanks,
Shaheen
daringrain32781
I ask my coworkers questions about a system, or why they do something, or for their opinion. Some of them return a very clearly AI-generated response, sometimes completely missing the point. What's the point? If I wanted an AI response I'd have asked it myself.

This bothers me a bit, because if I can expect this kind of response, what does that say about the thought they put into their work, even if they're using AI for everything coding-related?
YuukiJyoudai
Markdown started as a shorthand for HTML. Now it's the default format for documentation, note-taking, knowledge bases, and AI context.

What's interesting is how it keeps absorbing new capabilities without changing the format itself:

- Mermaid: diagrams from fenced code blocks
- KaTeX/MathJax: math rendering from `$...$` syntax
- Frontmatter: structured metadata via YAML blocks
- MDX: React components embedded in markdown
- Obsidian/Logseq: backlinks, canvas views, graph visualization — all from plain .md files

The pattern seems to be: the .md file stays human-readable plain text, but renderers get increasingly powerful. Same file, richer output.

This makes me wonder where this goes:

1. Does markdown keep evolving through renderer conventions until it becomes a de facto interactive document format? (The "HTML path" — HTML barely changed, but CSS/JS/browsers made it capable of anything.)

2. Does a new format emerge that can natively express interactivity, collapsible sections, embedded computations? Something between markdown and Jupyter notebooks?

3. Or does the answer involve a protocol/middleware layer — where .md files are the source, but some intermediate system (like a language server for documents) adds structure, validation, and interactivity on top?

I'm especially curious because of the AI angle. Plain .md files are the most AI-friendly knowledge format: any LLM can read, write, and search them with zero setup. A more complex format might gain expressiveness but lose this property.

What's your take? Is .md "good enough forever" with better renderers, or are we heading toward something new?
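The frontmatter convention is a good example of how thin these layers are: the file stays plain text, and the "structure" is just a split that any tool can reimplement. A minimal sketch that handles only flat `key: value` pairs (real tools parse the header with a full YAML library):

```python
def split_frontmatter(md: str):
    """Split a markdown document into (metadata dict, body).

    Handles only flat `key: value` pairs between `---` fences;
    nested YAML is out of scope for this sketch."""
    if not md.startswith("---\n"):
        return {}, md
    header, _, body = md[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")
```

An LLM reading the same file needs none of this machinery, which is exactly the property the post is worried about losing.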
amin2011
Hi HN,

I'm a 15-year-old full-stack developer, and I recently built Codeown (https://codeown.space).

The problem I wanted to solve: GitHub is great for code, but not for showing the "journey" or the UI. LinkedIn is too corporate and noisy for raw, work-in-progress (WIP) dev projects. I wanted a dedicated, clean space where developers can just share what they are building, get feedback, and log their progress.

Tech stack: I built the frontend with React and handle auth via Clerk. I recently had to migrate my backend/DB off Railway's free tier (classic indie hacker struggle!), but it taught me a lot about deployment and optimization.

We just hit our first 5 real users today, and the community is slowly starting to form.

I’m still learning, and I know the performance and UI can be improved. I would absolutely love your brutal, honest feedback on:

- The perceived performance (currently working on optimizing the React re-renders).
- The core idea: is this something you would use to track your side projects?

Thanks for taking a look! Happy to answer any technical questions.
moomoo11
I'm building software for a sector that is massive, but one where you don't really need AI. At least, not AI == LLM.

And before I go further, let me state up front that I do like AI coding agents. They are great as assistive tools.

People say that if the AI bubble pops, the economy tumbles. And okay, the M7 will certainly get rekt, but everyone else? Things will recover within a few years. We didn't make it to 2026 AD by taking the easy road.

You still need to visit the doctor. Goods still need to be delivered. Homes need to be built. We need to drill for oil. People still need to eat. And yes, unfortunately or not, we still need millions of administrators, because humans are not 0/1 systems.

Am I crazy to think that maybe it won't be that bad? There is still an infinite number of things to do, and maybe (call me stupid, whatever) it would be a good turning point for our species if we realized that speculative bubbles are absolutely destructive and not worth it.

I don't need a personal assistant to make calls for me to get a restaurant reservation, and I certainly don't care for AI slop videos. I would much rather we have better products and services that actually work, and even if they have rough edges I would prefer people are employed and busy doing something with their lives.

Maybe a world where we don't chase endless growth (to escape inflation, pay off debts, whatever the case) would be good. And also we put the nerds (not people like us, the engineers; I mean the evil dorks who cosplay as movie supervillains) back in the toy box and pick up different toys this time.
emilss
Would you use a backend where you just define schema, access policy, and functions?

Basically something like writing smart contracts on the EVM, but instead they run on a hyperscaler and have regular backend fundamentals.

Here's a mock frenchie made me; I was thinking something like this:

  schema User {
    email: string @private(owner)
    name: string @public
    balance: number @private(owner, admin)
  }

  policy {
    User.read: owner OR role("admin")
    User.update.balance: role("admin")
  }

  function transfer(from: User, to: User, amount: number) {
    assert(caller == from.owner OR caller.role == "admin")
    assert(from.balance >= amount)
    from.balance -= amount
    to.balance += amount
  }

I was playing with OpenFGA and AWS Lambda stuff, and that got me thinking about this.

So you would "deploy" this contract on a hyperscaler, which then lets users access it from your lean JS front-end, via something like this:

  const res = await fetch("https://api.hyperscaler-example.com/c/your-contract-id/transfer", {
    method: "POST",
    headers: {
      "Authorization": "Bearer <user-jwt>",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ from: "user_abc", to: "user_xyz", amount: 50 })
  });

The runtime resolves the caller identity from the JWT, checks the policy rules, runs the function, and handles the encryption/decryption of fields, so your frontend never touches any of that.

That's it. Would you use it? Is there something that does this exactly already? Feeling like building this.
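Server-side, the runtime the post describes boils down to a small dispatch: resolve the caller, evaluate the policy, then run the function. A minimal sketch in Python, with the policy as plain predicates (all names illustrative; a real version would verify the JWT and handle field encryption):

```python
# Policy: who may call which function, given the caller and the arguments.
POLICY = {
    "transfer": lambda caller, args: (
        caller["id"] == args["from"]["owner"] or caller.get("role") == "admin"
    ),
}

def transfer(args):
    """The deployed function body, mirroring the DSL example above."""
    assert args["from"]["balance"] >= args["amount"], "insufficient balance"
    args["from"]["balance"] -= args["amount"]
    args["to"]["balance"] += args["amount"]
    return {"ok": True}

FUNCTIONS = {"transfer": transfer}

def invoke(caller: dict, name: str, args: dict):
    """Entry point the HTTP layer would call after decoding the JWT."""
    if not POLICY[name](caller, args):
        return {"error": "forbidden"}
    return FUNCTIONS[name](args)
```

The interesting design question is whether the policy stays declarative (so it can be checked and audited separately, OpenFGA-style) or gets folded into the function body as asserts, as in the DSL example.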
otterley
Just got an email from 1Password:

Since 2005, 1Password has been on a mission to make security simple, reliable, and accessible for everyone. As the way people work and live online has evolved, so has 1Password.

More recently, we’ve invested significantly in new features that make 1Password even more powerful and effortless to use, helping protect what matters most to you, including:

* Automatic saving of logins and payment details

* Enhanced Watchtower alerts

* Faster, more secure device setup

* AI-powered item naming

* Expanded recovery options

* Proactive phishing prevention

While 1Password has grown substantially in value and capability, our pricing has remained largely unchanged for many years. To continue investing in innovation and the world-class security you expect, we’re updating pricing for Family plans, starting March 27, 2026.

Current vs New Pricing:

* Current price: $59.88 USD / year

* New price: $71.88 USD / year

The new price will take effect at your next renewal, provided it’s on or after March 27, 2026. Those occurring prior to March 27, 2026, will continue at the current pricing until your next renewal.

[Note: this is for family plans; the individual plan price increases even more, percentage-wise!]
piratesAndSons
As software development becomes a commodity thanks to LLMs, I wonder why more software developers don't switch to building medical devices to make their careers more secure. Here's why I picked medical devices in particular.

1. Natural Moat

Since human body hardware is more or less immutable in its most essential parts, you don't have to worry about some LLM hype cycle replacing you. Once you build the product and clear FDA or local certifications, you're set. Unlike Uber destroying the taxi medallion business, healthcare is a beast — no tech startup dares to bypass all the regulations and gatekeeping.

2. Regulatory Moat

The medical devices I'm talking about require around $50K–$200K for FDA clearance — low enough that any small business can manage it, but high enough to discourage bottom-feeders and Chinese product dumpers. It also lets you avoid the big established healthcare corporations, because this market segment is too small for them to care about, yet large enough for you to pull in $10M–$15M a year in revenue.

Medical device manufacturing sidesteps the two fatal flaws of software development: the lack of a moat and static, almost never-changing hardware margins. LLM companies don't care about copyright, IP, or the health of the broader economy — but they can't go head-to-head with the healthcare industry, so you don't have to worry about them at all.
leandrobon
Is there any reliable way to determine whether an image is AI-generated (or AI-edited) versus a real photo that’s been compressed, resized, or edited? Detectors seem brittle and disagree with each other. Is there anything dependable enough to automate, or is the answer that you can’t tell from pixels alone?
exabrial
Claude mangles files containing <name> as an XML tag, rewriting it to <n>.

If you use Claude Desktop and have it edit an XML file containing a <name> tag, the filesystem connector will mangle it to <n> every time.

This causes simple chat threads to run much longer than needed, and the tool simply isn't working correctly.

It's impossible to get actual support these days, other than reporting problems on HN. So here we are, in hopes you press that upvote button and maybe Boris might see this.
sebringj
I don't submit things to Hacker News unless it's related to my favorite tool ever, which, literally, I happened to have made. I made this out of being super lazy: I wanted my copilot (it works in all AI editors) to run my UI while it's coding and validate it at the same time by actually using the apps. I don't know how to contain how good this is for me to use, other than putting it here for people to look at. With Opus 4.5-4.6 it's extremely good; with GPT-5.3 it's still good, but you have to remind it to use "autonomo help" when it forgets how to use it correctly sometimes.

Anyways, please check it out if you're curious and want very fast, efficient, UI-driven validation (multi app/web/desktop at the same time, agnostic) while you vibe. I just keep using it every day, but I'm still waiting for something to make this obsolete.

Web page: https://sebringj.github.io/autonomo/

GitHub: https://github.com/sebringj/autonomo
aehsan4004
I spotted a usability gap on X (formerly Twitter): no way to categorize bookmarks by topic.

I suggested it publicly, and months later they rolled it out, with a shoutout from Grok.

Resume impact? Is it worth adding under "Product Contributions" (e.g., "Suggested bookmark categorization feature, adopted by X")? Overkill, useless, or a solid signal for PM/UX opportunities?
zekejohn
With everyone using Claude Code, Cursor, Codex, and the 100 other AI coding agents I've missed, I'm wondering how much editor mastery still matters, e.g. with Vim.

Being honest, the real reason I want to learn Vim is to boost my ego and assert my dominance, so I can tell people "I use Vim btw." But part of me also thinks the time investment could still pay off for speed, ergonomics, and working over SSH.

A bigger part suspects the marginal gains would disappear as more of the work is delegated to AI anyway. Why learn Vim if I'm just going to be prompting Opus all day?

For anyone who's been using Vim for a while AND uses AI to code (I'm assuming everyone codes with AI to some degree), my question is: does learning Vim still meaningfully improve your day-to-day productivity, even with AI, or is it mostly personal preference at this point?
danver0
I’ve been thinking about AI and developer jobs.

It feels like developers who build libraries, frameworks, compilers, and dev tools might be safer from AI replacement compared to people building typical CRUD apps.

My intuition is that tooling work requires deeper systems knowledge and taste, while a lot of app-level code is becoming easier for AI to generate.

Am I wrong? Curious what others here think.
sdgnbs
A free Bionic Reading extension that helps with ADHD and reading speed. It processes the text entirely locally.

License: MIT

Chrome Web Store: https://chromewebstore.google.com/detail/cllpokdpfkelkceomncfgebkegnjepdc?utm_source=item-share-cb

Source code: https://github.com/the0cp/citius-vide
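For the curious, the core Bionic Reading transform is tiny: emphasize roughly the first half of each word so the eye has a fixation anchor. A sketch of the idea in Python (the extension itself presumably injects `<b>` tags into the page DOM; markdown `**` is used here just to show the split):

```python
import re

def bionic(text: str) -> str:
    """Bold the first half of each word; `**` marks the emphasized prefix."""
    def emphasize(match):
        word = match.group(0)
        cut = max(1, len(word) // 2)  # always bold at least one character
        return f"**{word[:cut]}**{word[cut:]}"
    # Only transform alphabetic runs; numbers and punctuation pass through.
    return re.sub(r"[A-Za-z]+", emphasize, text)
```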
yc_surajkr
I built Orvia, a real-time, temporary collaboration room for instant conversations and fast media sharing.

~200 users have tried it so far. The main feedback wasn’t about missing features, but UX:

- The UI felt too "hacker tool"
- Empty rooms felt awkward
- Too many visible actions

So I redesigned it to feel calmer and more frictionless.

The idea is simple: create a room → share the link → talk & share files → leave → the room disappears.

No accounts. No setup. No stored history.

It’s built for quick, private, zero-overhead collaboration, not persistent communities.

I would really appreciate honest feedback on the UX and real-time experience, or any missing features.

URL: https://orvia.live
lilcodingthings
We run a small coding blog sharing our learning experience, and I keep hitting the same loop: Google won't index most of our pages because the site has no authority. Authority comes from backlinks. But nobody links to a site that doesn't show up in search results. 110+ posts and almost nothing showing up on Google. Bing has started indexing them, but Google won't budge. Google indexed us in the beginning, just a bit; then after one of their updates we were destroyed and never recovered.

We've tried long-form guides, short tutorials, long-tail keywords, competitive keywords, niche topics with zero competition. I'm not sure anymore if content is the problem.

We have no noindex issues, no missing sitemaps, no crawl errors — all the basic SEO boxes are checked. The site is submitted to Search Console, and PageSpeed Insights performance is good (99% desktop, 85+ mobile).

We started cross-posting to Medium, Dev.to, and Hashnode, but we can't even be sure that's the right path. We've been posting consistently, thinking consistency was the key — but it doesn't seem to be enough on its own.

It truly feels like we're fighting an invisible monster without knowing if what we're doing is correct.

Don't get me wrong, I'm not ranting — I still have patience left in me. Just thinking out loud here in case someone has been in the same shoes and found a way out.

What actually moved the needle for you? We'd appreciate any guidance.
ms_sv
Hello everyone.

Resumes and CVs have a fundamental problem: anyone can write anything. As someone who's been job searching, I've wondered if there's a better way to separate genuine experience from creative writing; I am an engineer at the end of the day, not a creative author.

I've been thinking about applying something similar to X's Community Notes model to skill verification. The idea: engineers could "fact check" claims on each other's CVs, not as formal references, but as a crowd-sourced verification layer where you get a check mark on your skill, like X's check mark. If someone claims they're an expert in Kubernetes, other engineers who've worked with them (or reviewed their OSS contributions) could validate or challenge that. Also, companies run repetitive interviews; why can't I simply do one interview and be "interviewed" fully for all other companies?

I put together a rough prototype to illustrate the concept: https://skillverdict.com/

Some questions I'm trying to work through (ask more, please):

- How useful would this be for engineers?
- Would this create its own set of problems (gaming the system, bias, grudges)?
- Could it scale beyond personal networks?
- Would companies even trust community-sourced verification?

Curious what you think about the mechanism itself, not the prototype. Would something like this reduce friction in hiring, or just add another layer of noise?