Secure Software Development
This module focused on the security risks associated with programming, future trends in secure software development and software architecture.
Teamwork Evidence
As teamwork evidence, we recorded videos of our team meetings, which can be found below.
Unit 1: Introduction to Secure Software Development
In this first unit we were introduced to the core concepts at the heart of Secure Software Development, with a particular focus on outlining different approaches and their strengths and weaknesses in relation to software development and architecture.
The main goal of this unit was to understand and familiarise ourselves with the industry standards and best practices which are utilised in the workplace.
The initial discussion post can be found below:
View Activity Discussion
Unit 2: UML Modelling to Support Secure System Planning
In this unit, the focus was on the use of UML diagrams to support the planning of a secure system. We gained hands-on experience with creating UML models and saw how they can improve communication among team members. I look forward to using this knowledge in my team projects at work.
Unit 3: Programming Languages: History, Concepts & Design
In this unit, we explored the history of programming languages, from Alan Turing's work to early languages like COBOL and Fortran, which shaped modern design. We then put theory into practice with Python, learning key concepts like inheritance and polymorphism. The unit also covered crucial security best practices to defend against common vulnerabilities. Finally, we learned about design patterns as templates for solving recurring software challenges. Overall, the unit connected programming theory with practical, secure coding skills.
This was an interesting yet busy unit; I particularly enjoyed its historical aspect.
Unit 4: Exploring Programming Language Concepts
In this unit we focused on regular expressions and recursion, specifically their security implications: learning not just how to use them, but also the risks they can introduce. The goal was to understand the pros and cons of each approach, as an aid to writing more secure and effective code.
Additionally, we completed an activity accompanying the reading of Weidman, A. (no date) Regular Expression Denial of Service, which can be found below:
A regular expression becomes an "Evil Regex" when vulnerable to Regular Expression Denial of Service (ReDoS), an algorithmic complexity attack. This vulnerability arises from "catastrophic backtracking," where the regex engine's evaluation time scales exponentially with input length, consuming excessive CPU resources and rendering systems unresponsive, as famously occurred with Stack Overflow in 2016. The root cause often lies in complex patterns like nested quantifiers (e.g., `(a+)+`) or polynomial overlapping adjacency. Such regexes are notoriously difficult to debug and maintain, and their security risks are often overlooked. The increasing use of AI-powered assistants, which can generate insecure expressions, further exacerbates this problem, as developers may inadvertently integrate vulnerable code. Effective mitigation involves a multi-pronged strategy: transitioning to ReDoS-safe engines like Google's `re2`, employing detection tools, validating input length, and implementing execution timeouts. Although indispensable for security tasks like input validation and sanitisation, the inherent complexity of regular expressions makes them a double-edged sword, frequently introducing denial-of-service vulnerabilities. Consequently, continuous developer education on secure coding practices and rigorous testing of all regex patterns, especially those generated by AI, are paramount to building resilient software.
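To make the nested-quantifier risk concrete, the sketch below (pattern and input are illustrative, not from the reading) compares the evil pattern `(a+)+$` against a linear-time pattern matching the same language, on a deliberately non-matching input that triggers catastrophic backtracking:

```python
import re
import time

EVIL = re.compile(r"^(a+)+$")  # nested quantifier: exponential backtracking
SAFE = re.compile(r"^a+$")     # same language, matched in linear time

def timed_match(pattern, text):
    """Run pattern.match and report how long the engine took."""
    start = time.perf_counter()
    result = pattern.match(text)
    return result, time.perf_counter() - start

# A short non-matching input is enough to expose the blow-up:
# the trailing "!" forces the engine to try every way of splitting the a's.
attack = "a" * 20 + "!"
_, evil_time = timed_match(EVIL, attack)
_, safe_time = timed_match(SAFE, attack)
print(f"evil: {evil_time:.4f}s  safe: {safe_time:.6f}s")
```

Each extra `a` roughly doubles the evil pattern's running time, which is why input-length validation and timeouts are effective mitigations even when the pattern itself cannot be changed.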
Unit 5: An Introduction to Testing
In this unit, our focus was on software testing, with a special emphasis on security. We learned essential terminology, various testing techniques, and important industry standards like OWASP. The main goal was to understand how to design effective test plans that could identify and prevent security breaches. We explored practical Python tools to automate testing, including using linters.
Equivalence Testing in Python:
This script demonstrates a concept called equivalence partitioning, which is a way of sorting a collection of items into groups based on a specific rule. The rule for this example states that two numbers, x and y, are considered equivalent if the difference between them is divisible by 4.
This is checked with the code (x - y) % 4 == 0. The script applies this rule to the set of integers from -3 to 4, grouping together numbers whose pairwise differences are divisible by 4 — equivalently, numbers with the same remainder modulo 4.
The result is the creation of four distinct groups where all numbers in a group are equivalent to each other under that rule. For instance, the numbers 1 and -3 are placed in the same group because their difference is 4, which is divisible by 4. The output first lists the groups ({0, 4}, {1, -3}, {2, -2}, {3, -1}) and then shows how each individual number from the original set maps into one of these groups.
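The script described above can be reconstructed as the following minimal sketch (variable names are illustrative):

```python
# Partition the integers -3..4 under the relation: x ~ y iff (x - y) % 4 == 0.
# Two numbers are equivalent exactly when they share the same remainder mod 4.
values = range(-3, 5)

groups = {}
for x in values:
    groups.setdefault(x % 4, set()).add(x)  # key = remainder class

print("Equivalence classes:", list(groups.values()))
for x in values:
    print(f"{x} -> class {x % 4}")
```

Running it produces the four classes listed above: {0, 4}, {1, -3}, {2, -2} and {3, -1}, followed by the mapping of each number to its class.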
Unit 6: Using Linters to Support Python Testing
In this unit, our focus was on using Python technologies to develop high-quality and secure code. We delved into linters, learning how to apply different ones, appreciating their unique contributions and how to use them in various scenarios. The main objective was to provide us with the tools to develop Python code that is free from errors and consistently designed.
I found linters to be a particularly interesting topic, as they are something I have been using and now feel I grasp much better.
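As a brief illustration (the file and function are hypothetical, but the warning codes are standard pycodestyle/pyflakes checks reported by a linter such as flake8), a snippet like the following would typically be flagged even though it runs correctly:

```python
# Deliberately non-compliant code: it executes fine, but a linter flags it.
import os            # F401: 'os' imported but unused


def add(a,b):        # E231: missing whitespace after ','
    x = a+b          # E225: missing whitespace around operator
    return x


print(add(2, 3))
```

This is precisely the value of linting: the code is functionally correct, so only a style/static-analysis pass surfaces the inconsistencies and the dead import.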
In this module, we also submitted our first assignment which consisted of a Design Document prepared by our team, and a peer review.
Unit 7: Introduction to Operating Systems
In this unit, our focus was on Operating Systems and their relationship with programming and security. We learned the core functions of a typical OS, looking at common examples and the key differences between processes, threads, and schedulers. We also discussed crucial approaches for enhancing OS security and compared various virtualisation methods. A significant part of the week was understanding the distinction between static and dynamic (shared) libraries, recognising the security implications they have when our applications need to interact with the operating system.
Integration, System and Acceptance Testing:
The software development lifecycle incorporates critical testing phases to ensure system integrity. For a complex application like an e-commerce platform, integration testing is essential, verifying the interoperability of components such as the product catalogue, shopping cart, and payment gateway. This phase confirms seamless data flow and interaction, utilising test doubles like mocks and stubs to create controlled environments. Mocks can simulate external payment services, enabling rigorous testing of transaction logic without real financial processing, while stubs can provide predefined responses from an inventory system that is not yet complete. Following successful integration, system testing evaluates the entire e-shop as a cohesive whole. This holistic assessment validates end-to-end functional workflows, from user registration to order fulfilment, and assesses non-functional requirements such as performance under high user loads and security against potential threats. The final stage, acceptance testing, ensures the platform meets business objectives and user expectations. User Acceptance Testing (UAT) involves target users performing real-world scenarios to validate usability and functionality, often through beta testing. Concurrently, Operational Acceptance Testing (OAT) confirms the e-shop's operational readiness, scrutinising procedures for deployment, backup, and recovery. Such a multi-layered testing strategy is fundamental to delivering a robust, reliable, and secure e-commerce experience.
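The mock-based approach described above can be sketched with the standard library's `unittest.mock`; the `checkout` function and the gateway's `charge` interface are hypothetical, invented for illustration:

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Hypothetical transaction logic; the payment gateway is injected
    so a test double can replace the real external service."""
    response = gateway.charge(cart_total)
    return "confirmed" if response["status"] == "ok" else "failed"

# The mock simulates the external payment service: no real financial
# processing occurs, yet the transaction logic is exercised end to end.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

print(checkout(49.99, gateway))
gateway.charge.assert_called_once_with(49.99)  # verify the interaction
```

Because the gateway is passed in rather than hard-coded, the same function works against the real service in production and against mocks (or stubs with canned failure responses) in integration tests.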
Unit 8: Cryptography and Its Use in Operating Systems
In this unit, our focus was on cryptography, exploring its principles, technology, and use with operating systems. We examined the meaning of cryptography and its application through a case study involving OS integration. A significant part of our learning was dedicated to exploring common cryptographic libraries and building a basic application that used these libraries to encode sample data.
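The exercise of encoding sample data with a cryptographic library can be illustrated with a minimal standard-library sketch; this is an assumed keyed-integrity scenario using HMAC-SHA256, not necessarily the library or task used in the unit itself:

```python
import hashlib
import hmac
import secrets

# Generate a random 256-bit key (in practice this would be managed by the OS
# keystore or a key-distribution mechanism, not held in memory like this).
key = secrets.token_bytes(32)
message = b"sample data"

# Produce an authentication tag binding the message to the key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)

# Verification recomputes the tag and compares in constant time,
# which resists timing side-channel attacks.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))
```

The constant-time comparison is the detail worth noting: a naive `==` on secret-derived strings can leak information through response timing.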
The link to the discussion centred on TrueCrypt can be found below:
View Activity Discussion
Unit 9: Developing an API for a Distributed Environment
For this module, our focus was on practical Python development skills crucial for our summative assessment. We built an API and used it to create and read records, which involved investigating CRUD capabilities and expanding our knowledge of Python libraries. We also reflected on the utility of an ontology in a distributed architecture. The key outcome was creating an API for file management, a significant step that involved bridging our secure back-end code with a user interface, expanding our skills from server-side development to include UI interaction.
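The CRUD capabilities behind such an API can be sketched framework-free, as below; the in-memory store and function names are illustrative, not the assessment's actual code, which exposed these operations over HTTP:

```python
import itertools

# In-memory record store standing in for the API's persistence layer.
_records = {}
_ids = itertools.count(1)

def create(data):
    """C: store a new record and return its generated id."""
    rid = next(_ids)
    _records[rid] = dict(data)
    return rid

def read(rid):
    """R: return the record, or None if the id is unknown."""
    return _records.get(rid)

def update(rid, data):
    """U: merge new fields into an existing record."""
    if rid in _records:
        _records[rid].update(data)
        return True
    return False

def delete(rid):
    """D: remove the record; report whether anything was deleted."""
    return _records.pop(rid, None) is not None

rid = create({"filename": "notes.txt"})
print(read(rid))
```

In the distributed setting, each of these functions maps naturally onto an HTTP verb (POST, GET, PUT/PATCH, DELETE), which is what turns this plain module into a web API.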
Unit 10: From Distributed Computing to Microarchitectures
In this unit, our focus was on system architectures, from monolithic deployments to microservices and virtualisation. We explored their evolution, strengths, and weaknesses, with a strong emphasis on security. We learned how distributed systems increase the attack surface and examined security attacks specific to virtual environments. A key takeaway was the importance of encryption and key distribution to protect data in modern, distributed applications. This brought the module into a timely and relevant context of distributed operations.
Faceted Data:
The concept of "faceted data" provides a robust, language-based methodology for enforcing information flow security to prevent data leakage. As introduced by Schmitz et al. (2016), this paradigm centres on "faceted values"—data constructs that dynamically adapt their behaviour and representation based on the privilege level of the observing context. This mechanism ensures sensitive information is only exposed to authorised entities, offering a strong confidentiality guarantee that is independent of application-level bugs. The primary strength of this model is its ability to enforce security at a fundamental level, decoupling information flow from application logic. However, the paradigm presents considerable implementation challenges, particularly when translating it from functional languages like Haskell, which use sophisticated constructs such as control and data monads. In object-oriented languages like Python, simulating this behaviour would conceptually require wrapper classes and context managers to track observer privileges, introducing significant architectural overhead and potential performance impacts. While a direct translation is complex, the core idea of context-aware data representation remains a powerful strategy for building secure systems, though its practical adoption requires navigating a steep conceptual curve and substantial engineering effort.
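The wrapper-class idea mentioned above can be made concrete with a conceptual Python sketch; this is not Schmitz et al.'s implementation, and the class and method names are invented for illustration:

```python
class Faceted:
    """A value with two facets: one revealed to privileged observers,
    one shown to everyone else. Access control lives in the value itself,
    decoupled from application logic."""

    def __init__(self, private, public):
        self._private = private  # facet for authorised observers
        self._public = public    # facet for unprivileged observers

    def observe(self, privileged: bool):
        """Return the facet matching the observer's privilege level."""
        return self._private if privileged else self._public

# The same object yields different representations per observer context.
salary = Faceted(private=85000, public="<redacted>")
print(salary.observe(privileged=True))
print(salary.observe(privileged=False))
```

A fuller simulation would track the observer's privilege implicitly (e.g. via a context manager) and propagate facets through arithmetic and comparisons, which is where the architectural overhead discussed above comes from.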
Unit 11: Future Trends in Secure Software Development
In this unit, we also submitted our second assignment, which consisted of an independently developed CLI Python application; the design document submitted previously served as its starting point.
The link to the debate on Microservices and Microkernels can be found below:
View Activity Discussion
Unit 12: The Great Tanenbaum-Torvalds Debate Revisited
For the last unit, our focus was on the historic Tanenbaum-Torvalds debate. We revisited their 1990s arguments over monolithic versus microkernel operating systems, considering them in both their original context and today's security-conscious world. We analysed how modern trends like microservices and constant cyber-attacks influence these design choices. This debate prompted us to critically re-evaluate our opinions on monolithic versus modular system architectures, thinking about which approach is best suited for the challenges of today's distributed and security-focused computing landscape.
Additionally, we submitted our third assignment, which consisted of the completion of this e-portfolio.