Internal assessment

For the new IB Diploma Computer Science syllabus, first taught from August 2025, with first examinations in May 2027.

Guide to the IA

The guide to the IA that I wrote for the previous syllabus was a very popular download. I want to thank all the teachers and students who sent me messages of encouragement over the years. It’s clear that there was a need for such a thorough guide, and many people found it very helpful.

With the support of my co-authors, I have developed a new, updated guide for the new Internal Assessment, which has been added as a chapter to the Hachette book, “Computer Science for the IB Diploma”.

The new IA guide is a 40-page chapter on the Internal Assessment. For each of the five criteria, it includes:

  • IB assessment criteria statements
  • Top tips
  • Common mistakes
  • Exemplar samples from real student IAs, re-worked for the new syllabus (separate from the IB-authored samples)
  • Detailed checklists

Those who found my guide to the IA for the previous syllabus useful should find the chapter in the Hachette textbook very helpful for navigating the new IA.

Sample IAs

The following are the official sample IAs published by the IBO for the new course. They have been marked by IB moderators, with comments provided, but are subject to the following fine print from the IB:

Although the examples have been standardized by experienced Diploma Programme (DP) computer science teachers, these example computational solutions, their marks and examiner comments are for guidance only. They are not definitive and have not been generated through the International Baccalaureate’s (IB) process of standardization. These examples will be replaced with authentic examination examples following first assessment in 2027.

  • Example 1: Birthday card generator (24/30)
  • Example 2: Quiz game (17/30)
  • Example 3: Chat room (18/30)
  • Example 4: Car management system (23/30)
  • Example 5: Currency converter (16/30)
  • Example 6: IB timetable (26/30)
  • Example 7: Trip organizer (22/30)
  • Example 8: Flappy bird (13/30)

My students can access them directly via Google Drive (STC accounts only). As these materials are IBO copyrighted, I cannot share them with other teachers and students, so please don’t request drive access. IB CompSci teachers can download these for their own classes from the IBO resource center.

Suggested timeline for the IA

The following is my anticipated breakdown of the 35 hours of class time for this IA…

Important note: I am not planning to deliver these lessons as one uninterrupted sequential block. For instance, lesson 1 will be delivered on its own, with at least a two-week gap before lesson 2, so students can research their ideas at home before needing to submit a proposal.

Lessons    Teaching and learning
1          Assessment overview; advice on project selection; review of exemplar projects; research ideas
2          Submit project proposal
3–4        Scenario, context, success criteria
5–6        Initial planning: UML, structure diagram, Gantt chart
7–12       System overview, with one lesson for each of: UX diagrams; flowcharts; UML overview; extras such as use case diagram, DFD, networking diagram, ML modeling, etc.; functional testing; structural testing
13–30      Self-directed programming time
31–33      Development documentation and video
34–35      Evaluation

Assessment criteria

Criterion A: Problem specification (4 marks)

The problem specification is the starting point of the solution and must be used as a basis for the development of the product.

  • The student should have the necessary technical skills, access to appropriate hardware and software, and the availability of relevant data to address the problem.
  • The success criteria identified in the problem specification (assessed by criterion A) will be used in the planning (assessed by criterion B), in the development (assessed by criterion D) and in the evaluation (assessed by criterion E)

The recommended word count for this criterion is 300 words.

Marks  Description
0      The response does not reach a standard described by the descriptors below.
1–2    Outlines a problem scenario.
       States limited success criteria.
       Outlines the nature of the solution in a computational context.
3–4    Describes the problem scenario in terms of its measurable solution requirements.
       States appropriate success criteria.
       Explains the choice of computational context for the solution.

Clarifications

  • The problem scenario is a clear description of the problem including its measurable solution requirements. The description should relate directly to the problem whether this be in the world around us, other fields of knowledge or a current issue in computing.
  • Success criteria are measurable outcomes derived from the solution requirements that indicate the successful development of the product.
  • The computational context is the specific area of computing that is selected to be used in the solution.

Criterion B: Planning (4 marks)

The planning of the product must be consistent with the problem specification in criterion A.

  • This criterion assesses how the problem scenario has been decomposed into component parts.
  • The plan should address the requirements of the solution, in terms of the success criteria, and include a proposed chronology for the steps involved in planning, designing, developing, testing and evaluating the solution.
  • A plan can be presented in different forms, but diagrams such as Gantt and agile charts can effectively support the planning process.
  • The plan may include any relevant research such as the use of existing code libraries.

The recommended word count for this criterion is 150 words.

Marks  Description
0      The response does not reach a standard described by the descriptors below.
1–2    Constructs a partial decomposition of the problem scenario.
       Constructs a plan that addresses some of the success criteria of the solution.
3–4    Constructs a reasonable decomposition of the problem scenario.
       Constructs a plan that addresses the success criteria of the solution.

Clarifications

  • Decomposition is the breaking down of the problem scenario identified in criterion A into smaller, more manageable sub-problems or components. The decomposition can be effectively constructed using diagrams.
  • A reasonable decomposition breaks the problem down into essential components that support the construction of a plan.

Criterion C: System overview (6 marks)

The system overview of the product must be consistent with the problem specification in criterion A, and the planning in criterion B.

  • The system overview should include a system model with the key components, their relationships, the rules governing their interaction, the algorithms required by these components, and the user interface.
  • The system overview should have the clarity to enable a third party to re-create the product.
  • The system model will provide the information for a viable testing strategy.

The recommended word count for this criterion is 150 words.

Marks  Description
0      The response does not reach a standard described by the descriptors below.
1–2    Outlines a limited system model.
       Identifies algorithms for the components of the system model.
       Identifies a testing strategy for at least one success criterion.
3–4    Constructs a system model that is not complete.
       Constructs algorithms for the components of their system model that lead to partial functionality of the product.
       Outlines a testing strategy that aligns with at least three success criteria.
5–6    Constructs a complete system model.
       Constructs algorithms for the components of the system model that enable the product to perform.
       Describes a testing strategy that aligns with the success criteria.

Clarifications

  • A system model consists of diagrams that include the components of the system and how they are connected. The system model will include the design of the User Interface. A complete system model does not include the algorithms for each of the components.
  • Algorithms can be presented in different forms including natural language, flow charts or pseudocode, and should address the individual components of the system model.
  • The testing strategy refers to a systematic approach for evaluating whether the computational solution works as intended. The testing strategy should ensure that code functions correctly and handles unexpected or incorrect inputs. This can be represented effectively in a table with proposed test data and expected outcomes.
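
Such a table of proposed test data and expected outcomes can also be mirrored directly in code. The sketch below assumes a hypothetical convert() function from a currency-converter style product; all names and test values are illustrative, not taken from any IB exemplar:

```python
# Minimal sketch of a testing table in code, assuming a hypothetical
# convert() function (illustrative names and values only).

def convert(amount, rate):
    """Convert an amount of money using a given exchange rate."""
    if amount < 0 or rate <= 0:
        raise ValueError("amount must be non-negative and rate positive")
    return round(amount * rate, 2)

# Each row mirrors the proposed-test-data table:
# (description, inputs, expected outcome)
test_table = [
    ("normal data",   (100, 1.25), 125.0),
    ("boundary data", (0, 1.25),   0.0),
]

for description, args, expected in test_table:
    actual = convert(*args)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{description}: expected {expected}, got {actual} -> {status}")

# Erroneous data should be rejected, not silently converted.
try:
    convert(-5, 1.25)
    print("erroneous data: FAIL (no error raised)")
except ValueError:
    print("erroneous data: PASS (invalid input rejected)")
```

Covering normal, boundary and erroneous data in this way feeds directly into the criterion D requirement to test for correctness and reliability.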

Criterion D: Development (12 marks)

The development of the product must be consistent with the problem specification in criterion A, the planning in criterion B and the system overview developed in criterion C.

  • The video must provide evidence of the functionality and give examples of the testing of the product.
  • The development of the solution must justify the structure of the product, why it is appropriate and demonstrate the techniques used to develop the product based on the algorithms constructed in criterion C. These techniques may include loops, data structures, existing libraries and the integration of software tools.
  • The testing strategy must include testing for correctness, reliability, and efficiency. The testing must be described and justified in the documentation with supporting examples seen in the video.

The recommended word count for this criterion is 1000 words.
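
As a sketch of what justifying one such technique might look like in practice (the quiz-game scenario and all names here are illustrative, not taken from any exemplar):

```python
# Illustrative fragment: justifying a data-structure choice in the write-up.
# A dictionary (hash-based data structure) replaces a linear search so
# question lookup stays O(1) on average as the question bank grows.

questions = [
    {"id": 1, "text": "What does CPU stand for?", "answer": "central processing unit"},
    {"id": 2, "text": "What base is binary?",     "answer": "2"},
]

# Technique: a dict comprehension builds an id -> question index once.
index = {q["id"]: q for q in questions}

def lookup(question_id):
    """Return the question with the given id, or None if absent."""
    return index.get(question_id)  # O(1) average, vs O(n) linear scan

print(lookup(2)["text"])
```

The accompanying documentation would then explain why the dictionary was chosen over repeated linear searches, which is the kind of justified choice the upper mark bands reward.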

Marks  Description
0      The response does not reach a standard described by the descriptors below.
1–3    Constructs a product with very limited functionality.
       Constructs a product using no appropriate techniques to implement their algorithms.
       States the choices made to implement their algorithm.
       States the testing strategy used.
4–6    Constructs a product that has limited functionality.
       Constructs a product using at least one appropriate technique to implement their algorithms.
       Outlines the choices made to implement their algorithm.
       States the effectiveness of the testing strategy.
7–9    Constructs a product that has partial functionality.
       Constructs a product that uses some appropriate techniques to implement their algorithms.
       Explains the choices made to implement their algorithm.
       Describes the effectiveness of the testing strategy.
10–12  Constructs a fully functional product.
       Constructs a product that uses appropriate techniques to implement their algorithms.
       Evaluates the choices made to implement their algorithm.
       Justifies the effectiveness of the testing strategy.

Clarifications

  • Implementation and coding of the algorithms: Techniques in the criteria refer to the process of programming algorithms using code. The documentation must highlight key elements of code that are important for the efficient functioning of the algorithms. Any code presented in the solution must include relevant comments, be consistent and be readable. Code excerpts included in the documentation must be referenced to the full source code submitted as an appendix.
  • The video must demonstrate the functionality of the product. The deployment of the testing strategy and its effectiveness must be described in the documentation with examples of the testing seen in the video.
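
For instance, a short excerpt in the documentation might look like the following sketch (the high-score example and all names are invented for illustration; a real excerpt would be cross-referenced to the full source code in the appendix):

```python
# Illustrative excerpt only; in a real IA this would be cross-referenced
# to the full source code submitted as an appendix.

def top_scores(scores, n=5):
    """Return the n highest scores, best first."""
    # Sorting once with reverse=True and then slicing keeps the code
    # readable and avoids a second pass to flip the order.
    return sorted(scores, reverse=True)[:n]

print(top_scores([12, 98, 45, 77, 60, 33], n=3))  # prints [98, 77, 60]
```

Note the consistent naming, the docstring, and the comment explaining *why* the code is written this way rather than restating *what* it does.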

Criterion E: Evaluation (4 marks)

The evaluation of the product must be consistent with the problem specification and success criteria in criterion A.

The recommended word count for this criterion is 400 words.

Marks  Description
0      The response does not reach a standard described by the descriptors below.
1–2    States the extent to which the success criteria were met.
       Describes improvements to the product.
3–4    Evaluates the extent to which the success criteria were met.
       Justifies improvements to the product.

AI in the Computer Science IA

The Computer Science IA is no different from any other IB assessment.

It is subject to the same Academic Integrity Policy as everything else, including Appendix 6: Guidance on the use of artificial intelligence tools (2023).

It states: …students need to be aware that the IB does not regard any work produced—even only in part—by such tools to be their own. Therefore, as with any quote or material from another source, it must be clear that any AI-generated text, image or graph included in a piece of work has been copied from such software. The software must be credited in the body of the text and appropriately referenced in the bibliography. If this is not done, the student would be misrepresenting content—as it was not originally written by them—which is a form of academic misconduct.

Does this mean I can use LLM generated programming code in my IA?

That’s the same as asking: can you use code sourced from Google, Stack Overflow, or GitHub? Sure! If it is appropriately referenced, you can use some externally sourced code. Would you want your entire IA to be code from these sources? No!

Externally sourced code, properly cited and in moderation, is perfectly fine and expected. That said, the vast majority of the programming code should be your own independently produced work. The IA is an opportunity for you to showcase your personal programming expertise, not your ability to copy-and-paste someone else’s code.
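
One way to make the attribution visible at the point of use is a comment block in the code itself. The sketch below is illustrative only: the tool name and bibliography entry are placeholders, and the exact citation format should follow your referencing style:

```python
import random

# ATTRIBUTION (placeholder): the shuffle below was adapted from code
# suggested by an AI tool; the actual tool must be credited in the body
# of the text and referenced in the bibliography, e.g. as entry [3].
def shuffle_in_place(items):
    # Fisher-Yates shuffle, adapted from AI-generated code (entry [3]).
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items
```

Comments like this make it unambiguous to the moderator which code is yours and which is adapted, which is exactly what the Academic Integrity Policy asks for.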

TL;DR: AI tools can be used and are treated the same as any other resource that students may use. They should be fully and properly cited and referenced at all times; failure to do so risks breaching the Academic Integrity Policy.


Copyright © Paul Baumgarten.