TL;DR
Harvard Business Review published "9 Trends Shaping Work in 2026 and Beyond" in early February. The core message: AI investment expectations remain high, but most deployments are not delivering meaningful returns, and the risks are being systematically underestimated. As someone who has built over 100 AI prototypes and shipped real AI products, I read this with a mix of recognition and frustration. Here is my honest commentary. Not academic, not theoretical, but from the perspective of someone who deals with these realities every day.
Why This Article Caught My Attention
I read a lot. Technical papers, business strategy books, industry reports. I am often disappointed. Too optimistic, too pessimistic, or too generic to be useful.
The HBR article by Peter Aykens, Kaelyn Lowmaster, Emily Rose McRae, and Jonah Shepp is different. It makes a claim I recognize from my own work: AI expectations have stayed high while reality has not caught up.
My perspective is specific. I have built more than 100 AI prototypes. I have helped companies introduce AI systems into real workflows. I built BuchhaltGenie, an AI-first accounting platform for Austrian businesses, from scratch. And I spent years as an IT project manager inside a large Austrian corporation while building AI products in parallel.
This combination gives me a view that is neither purely academic nor purely vendor-optimistic. Some of what HBR describes, I see daily. Other parts miss important nuances. Here is my trend-by-trend commentary.
Trend 1: The ROI Gap. No Surprise If You Are Honest
The article opens with a finding that should make boardrooms uncomfortable: CEO expectations for AI-driven growth remain high, but the evidence shows most AI investments are not delivering meaningful returns.
I am not surprised. The reason is almost never the technology itself.
The latest models from OpenAI, Anthropic, and the open-source community are capable. The problem is how AI investments are made. Companies buy a tool, expect results from day one, and wonder why nothing changes.
AI is not a plug-in product. It is a capability that needs to be integrated into existing processes, with real data, for specific tasks. This integration takes time and resources. And it regularly fails because the foundations are missing: poor data quality, unclear processes, insufficient internal expertise.
What I see in practice: the companies that use AI successfully have one thing in common. They did not start with AI. They started with the problem. What specifically needs to improve? How do we measure it? Only then did they ask whether AI is the right tool.
Reversing this order, buying AI first and looking for problems to solve, is the most expensive mistake I see repeatedly.
Trend 2: Premature Layoffs. The Risk Nobody Wants to Say Out Loud
According to HBR, companies are cutting staff in anticipation of AI-driven productivity gains before those gains have actually materialized. The result: institutional knowledge disappears, and the AI fails to deliver what was promised.
This is not new. It is a replay of outsourcing decisions from the 1990s and 2000s. Back then, companies offshored IT departments assuming external providers would be cheaper and better. Some were right. Many permanently lost valuable operational knowledge.
The same thing is happening with AI. Faster, with less transparency about what is actually being given up.
My position on this: Do not let go of people because of an AI that does not yet exist or whose maturity for your specific requirements is unproven. Use AI to free up capacity for work that was not possible before. That is sustainable. Headcount reductions based on AI promises are not.
Trend 3: Cultural Dissonance. The Underrated Risk
AI implementations rarely fail because of the technology. They fail because people are not brought along.
HBR describes cultural dissonance as a significant risk: when the pace of technological change exceeds an organization's ability to adapt, the friction shows up as productivity loss, turnover, and internal conflict.
I have seen this in European companies. The skepticism toward AI here, particularly in Austria and Germany, is not irrational resistance. It is often legitimate caution. Workers reasonably ask: What happens to my job? What happens to my data? What happens when the system makes a mistake?
Dismissing these questions erodes trust. And without trust, no AI tool will be properly adopted, regardless of how technically sound it is.
My advice: Bring your team in early. Not with a presentation about AI's potential, but with honest answers to the question: "How will my work change?" This requires the courage to be clear, even when some answers are uncomfortable.
Trend 4: Mental Fitness as a Competitive Factor
HBR lists "declining mental fitness" as one of the nine trends. The argument: the acceleration driven by AI, constant availability, and uncertainty about professional futures is increasing psychological load on workers.
This is not a new problem. AI amplifies it.
I have experienced what it means to simultaneously work at a large corporation and build an AI product on the side. It is a question of energy management. Without deliberate boundaries like set working hours, recovery time, and clear priorities, you burn out faster than you can recover.
For organizations, this means: AI-driven productivity gains will not automatically translate into better employee wellbeing. If the time saved is immediately filled with new tasks, the stress increases, not decreases.
What I observe: The most effective people I know who work intensively with AI share one trait: they do not work more, they work more focused. The saved time goes toward recovery, not more output. This sounds simple. It is not, as long as organizational culture rewards quantity over quality.
Trend 5: Low-Quality AI Output. The Quality Problem Nobody Sees
This is the trend that concerns me most personally.
HBR describes the risk of "low-quality AI output": AI-generated content, decisions, and analyses that are not adequately reviewed before they take effect. The consequences range from incorrect customer communications to flawed reports to erroneous automated decisions.
I build AI systems for Austrian compliance requirements: tax law, accounting standards, data protection. In this context, "low-quality output" is not an abstract risk. It is a tax penalty. A GDPR violation. An incorrect accounting entry.
The core problem: the easier AI is to use, the less inclined users are to question its outputs critically. An LLM that confidently delivers a wrong answer appears more credible than a human who is uncertain but correct.
How I handle this: Every AI system I build has clear boundaries: areas where the AI does not decide autonomously but instead triggers human review. This costs efficiency. It prevents errors whose costs would far exceed the saved efficiency.
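The boundary pattern described above can be sketched in a few lines. This is a hypothetical illustration, not my production code: the task list, the confidence score, and the threshold are all assumptions that would be tuned per use case.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90                    # assumed cutoff, tuned per use case
AUTONOMOUS_TASKS = {"categorize_receipt"}  # tasks the AI may decide alone

@dataclass
class AIResult:
    task: str
    output: str
    confidence: float  # model's calibrated confidence score (assumed available)

def route(result: AIResult) -> str:
    """Return 'auto' if the AI may act on its own, else 'human_review'."""
    if result.task not in AUTONOMOUS_TASKS:
        return "human_review"   # outside the allowed boundary: always a human
    if result.confidence < REVIEW_THRESHOLD:
        return "human_review"   # inside the boundary, but too uncertain
    return "auto"

# A routine categorization passes; a tax filing never decides itself.
print(route(AIResult("categorize_receipt", "office supplies", 0.97)))  # auto
print(route(AIResult("file_vat_return", "draft", 0.99)))               # human_review
```

The point of the sketch: the boundary is defined by the task, not only by the model's confidence. Some actions route to a human no matter how sure the AI claims to be.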
Trend 6: Security and Governance. From Nice-to-Have to Mandatory
The EU AI Act entered into force in August 2024. Its first obligations, the bans on prohibited practices and transparency rules for general-purpose models, have applied since 2025, and the comprehensive documentation and conformity requirements for high-risk systems phase in from 2026. In 2026, this is not a future scenario. It is present reality.
HBR describes new security and governance challenges from AI deployment. In the EU legal context, this remains a blind spot for many companies.
For BuchhaltGenie, I built a complete compliance stack: 13 Austrian and EU legal standards, 408+ row-level security policies, GDPR-compliant data handling. This was not optional. It was a prerequisite for a product that processes real financial data.
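Row-level security means every query is constrained to the rows the requesting user is allowed to see. In practice this lives in the database, not in application code; the following is only a minimal sketch of the idea, with hypothetical field names:

```python
# Hypothetical sketch of a row-level security check: each record carries a
# tenant_id, and every read is filtered by the requesting user's tenant.

records = [
    {"id": 1, "tenant_id": "firm_a", "amount": 120.00},
    {"id": 2, "tenant_id": "firm_b", "amount": 75.50},
]

def visible_rows(rows, user_tenant):
    """Return only the rows belonging to the user's own tenant."""
    return [r for r in rows if r["tenant_id"] == user_tenant]

print(len(visible_rows(records, "firm_a")))  # 1
```

The design point: the filter is applied on every access path, so forgetting a WHERE clause in one query cannot leak another client's financial data.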
My assessment for 2026: Companies deploying AI while ignoring governance requirements will pay dearly. Either through regulatory consequences or through loss of trust from customers and partners. The EU AI Act is not optional.
Trend 7: Skills-Based Organizations. The End of the Job Title?
HBR describes the trend toward competency-based organizations: instead of rigid job descriptions, skills take center stage. Who handles which tasks is determined not by title but by demonstrated capability.
This sounds good. And it is, when implemented correctly.
The risk: skills-based organizations require a maturity in talent management that many companies do not yet have. Defining competency profiles, measuring them, and developing them is demanding work. Half-hearted implementation delivers neither the flexibility of a skills-based org nor the stability of traditional job descriptions.
From my perspective as a solo founder: I already live in a skills-based reality. My clients engage me for specific capabilities, not a job title. This means continuous learning, clear communication about what I can do, and honest limits about what I cannot. This is the future for knowledge workers, whether they are employed or independent.
Trend 8: AI Governance as a Management Responsibility
For a long time, AI governance was an IT question. Now it is a management question.
Who decides which AI systems to deploy? Who is responsible when AI makes mistakes? Who monitors whether deployed models still meet current ethical and legal requirements?
These questions are still too rarely discussed at the leadership level in European companies. AI is purchased like software, with a brief evaluation and a purchasing decision. Without a strategic framework, without accountability, without review mechanisms.
What I recommend: Every company seriously using AI needs a clear answer to three questions: Who decides on AI deployment? Who is responsible for failures? How do we regularly assess whether our AI systems are still fit for purpose?
Trend 9: The Hiring Paradox. AI Needs Human Expertise
Perhaps the most counterintuitive trend: despite AI, or precisely because of it, demand for highly qualified human specialists in certain areas is increasing.
AI automates repetitive tasks. It thereby creates time for more complex activities. These complex activities require more experience, more judgment, more domain expertise. Not less.
The result is a market that on one side creates demand for AI expertise and on the other raises requirements for all other roles working with AI outputs. An accountant working with an AI tool must be able to understand and validate the outputs. This requires more accounting knowledge, not less.
My conclusion on this trend: AI does not make people redundant. It changes which human capabilities are in demand. Demand shifts from executors to adjudicators: from those who perform tasks to those who evaluate whether AI results are correct.
What I Take Away from the HBR Article Overall
The article is a valuable reality check. Not because it says anything fundamentally new. Much of it has been known in practice for a while. But because it aggregates these insights in a form that leadership teams should take seriously.
Where the HBR perspective sometimes falls short for European readers: it is US-centric. The regulatory environment in the EU, including the EU AI Act, GDPR, and labor law, creates a different context than the American market. European companies translating these trends should factor that in.
But the core message is universal: AI is reshaping work. The organizations that benefit are not those who invest the most in AI. They are the ones who most clearly understand where AI genuinely helps and where it does not.
That applies to large enterprises. And it applies equally to small and medium-sized businesses across Europe.
If you are thinking through how to use AI meaningfully in your organization, I am available for a conversation. Reach out via the contact page. No sales pressure, just concrete recommendations.