A new peer-reviewed article co-authored by Thea Lovise Ahlgren, Helene Fønstelien Sunde, Kai-Kristian Kemell, and Anh Nguyen-Duc has been published in Information and Software Technology (Elsevier).
The paper, titled “Assisting Early-Stage Software Startups with LLMs: Effective Prompt Engineering and System Instruction Design”, investigates how large language models (LLMs) can be adapted to support early-stage software startups through tailored prompt engineering and system instruction design—without requiring model retraining.
The study introduces StartupGPT, an LLM-based assistant developed using a design science methodology. StartupGPT was evaluated with 25 startup practitioners across five key use cases: MVP development, product planning, market analysis, funding support, and business model generation.
Findings show that well-designed prompts and system instructions significantly improved user satisfaction and perceived effectiveness, with evaluation scores reaching:
- Satisfaction: 93.33%
- Effectiveness: 80%
- Efficiency: 80%
- Reliability: 86.67%
However, the study also identifies areas for further development, including better context retention, personalization, communication tone, and sourcing of references.
Read the full open-access article here: https://doi.org/10.1016/j.infsof.2025.107832