18th April 2025

The Column: Sherif Eltarabishy on Technology, Research and Development

The Column gives you the opportunity to ask our experts about their work, and how it shapes the built environment.

Last month, you submitted your questions for Sherif Eltarabishy, who is a design systems analyst at Foster + Partners, working within the Applied R+D team. Sherif is driving cutting-edge applied machine learning alongside full-stack software development, geometry, optimisation, XR and digital fabrication.

His responses span a range of topics, including the importance of process mapping, the challenges associated with GenAI, and the newly launched Cyclops plugin, which turbocharges ray-tracing simulations to enhance sustainable design.

How does the Applied R+D team contribute to the practice’s output?

Our Applied R+D team mirrors the multidisciplinary nature of the practice. We benefit from a wide range of expertise, from architecture and engineering to art, computer science, and applied mathematics.

We also have specific expertise in computational design, performance analysis, optimisation, fabrication and interaction design, and we work with technologies such as augmented reality, AI and machine learning, and real-time simulation to help the architects and their clients visualise and experience evolving designs.

What really amplifies our work is the fact that we’re embedded in the day-to-day flow of projects across the whole practice. Our ‘clients’ are our colleagues. We help them test different ideas and overcome design challenges – observing and developing new processes, tools, and mechanisms, and receiving their feedback in real time. That tight feedback loop creates a kind of live R+D environment – and providing this service in-house positions the practice uniquely within the industry.

How are your team harnessing technology and research to improve the architectural design process?

Part of our job is understanding which technological innovations may have transformational potential for the AEC industry – and ensuring that we can adopt them to revolutionise our workflows.

But before we even think about technology, we begin every project with process mapping. We don’t try to shoehorn specific technologies into workflows they weren’t designed for. Instead, we dissect how systems operate – understanding dependencies, bottlenecks, and friction points – and work out how to create more efficient pathways to reach project goals. Once processes have been mapped out, we can reimagine them from the ground up, finding the most elegant routes to our goals and improving the overall efficiency and effectiveness of our systems.

How do you see designers and algorithms working together in a collaborative way?

In recent years, the rise of AI has encouraged designers to question their existing processes and consider alternative ways of working – more than any prior technology.

In my view, successful design balances human intuition with computational rigour. Over time, we have seen this intuition evolving across three phases. First, embodied intuition that is deeply personal and tacit. Then, externalised intuition, which surfaced through parametric and computational workflows. And now, we are entering a third phase of distributed intuition, where humans and probabilistic systems co-author outcomes. This shift creates exciting opportunities in practice – but also poses many new challenges.

What are the biggest challenges behind the use of GenAI technologies in the design process?

GenAI covers a range of machine learning models, which are used for content generation across different modalities, such as image, text, music, video and 3D.

One risk we’ve observed, especially with agentic workflows, is ‘review debt.’ This is when generative models rapidly create hundreds of variations, each requiring critical scrutiny. The volume can overwhelm teams and create blind spots. Such systems can produce deceptively coherent outputs that mask embedded errors, inconsistencies, or biases.

This is why transparency and traceability need to be built into these tools, rather than being an afterthought. We must ensure that workflows promote critical thinking and align with the practice’s goals. The objective isn’t to replace human judgment but to elevate it with the right level of oversight.

Furthermore, we have the usual issues around data, whether it’s the lack of domain specificity in the data used to train these foundation models, or issues around IP and legal frameworks. One of the most important challenges we face, as creators, is how we position our value and creativity within these processes.

How do you decide where AI fits in the design process and where it doesn’t?

Design decisions are rarely binary, yet conversations around AI often reduce them to two options: augment or automate.

In practice, it is much more nuanced. We look at factors such as complexity, risk, traceability, and creative intent to decide where AI can meaningfully contribute – and where human judgment and intervention are essential.

Sometimes, the right answer isn’t AI at all. A good example of this is Cyclops, our GPU-accelerated simulation tool for the built environment, which is now publicly available for anyone to use. Initially, we explored AI models to predict building performance in early-stage design. Although they were fast, they lacked the certainty and repeatability that we needed. In some contexts, this might be acceptable. But in high-stakes contexts such as environmental analysis for the built environment, results must be consistent and grounded in known principles, not just probable guesses.

So instead, we used parallel computing on GPUs to massively speed up physics-based simulations. The result? Thousands of working hours saved, with precise, traceable, benchmarked outcomes that make sustainability and performance assessments faster and more actionable. This is why defining goals and mapping current processes to establish a baseline should always precede choosing a tool or technology.
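To give a flavour of the data-parallel workload Sherif describes, here is a minimal, illustrative sketch – not Cyclops code – of the kind of primitive that ray-traced environmental analysis rests on: testing a large batch of sample rays against an occluding triangle in a single vectorised pass. The function name, the toy geometry and the CuPy swap noted in the comment are assumptions for illustration only.

import numpy as np   # swapping this for CuPy (where the same array functions exist) would move the work to a GPU

def rays_hit_triangle(origins, directions, v0, v1, v2, eps=1e-9):
    # Batched Moller-Trumbore ray/triangle intersection.
    # origins, directions: (N, 3) arrays, one sample ray per row.
    # v0, v1, v2: (3,) vertices of a single occluding triangle.
    # Returns a boolean mask of shape (N,) marking rays blocked by the triangle.
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(directions, e2)              # (N, 3)
    det = pvec @ e1                              # (N,)
    valid = np.abs(det) > eps                    # rays not parallel to the triangle
    inv_det = 1.0 / np.where(valid, det, 1.0)
    tvec = origins - v0
    u = (tvec * pvec).sum(axis=1) * inv_det
    qvec = np.cross(tvec, e1)
    v = (directions * qvec).sum(axis=1) * inv_det
    t = (e2 * qvec).sum(axis=1) * inv_det
    return valid & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)

# Toy usage: estimate how many of 100,000 sky-sample rays from one point
# are blocked by a single shading triangle (illustrative geometry only).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = np.zeros_like(dirs)
tri = [np.array(p, dtype=float) for p in ([2, -1, -1], [2, 1, -1], [2, 0, 1])]
blocked = rays_hit_triangle(origins, dirs, *tri)
print(f"{blocked.mean():.1%} of sample rays blocked")

Because every ray is evaluated by the same arithmetic, the whole batch maps naturally onto thousands of GPU threads – which is what makes physics-based simulation at this scale fast enough for early-stage design iteration.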

How do you see technology changing the design process in the future?

It’s hard to predict exactly, but two things are certain: the pace of change is accelerating and our ability to learn and adapt will matter more than any single tool.

The key is creating the right conditions for responsible experimentation, where data is shareable, infrastructure is secure, and workflows align with legal and ethical frameworks. Without that, we risk chasing shiny tools with no clarity, continuity, or control.

At the same time, we are witnessing a deeper shift: we are now transitioning from AI-aware to AI-native ways of working. In an AI-native practice, technology doesn’t just assist – it shapes how we design, collaborate, document and educate, and, more importantly, how we define value.

Eventually, we will stop asking “how do you use AI?” just as no one asks how you use electricity or the internet anymore. It will simply be embedded in how designs happen. It is fascinating to think about what kind of future this could enable.
