Books

T. Grechenig, M. Bernhart, R. Breiteneder, K. Kappel (Eds.): “Softwaretechnik. Mit Fallbeispielen aus realen Entwicklungsprojekten”; Pearson Studium, Munich, Germany, 2009, ISBN: 978-3-8689-4007-7; 688 pages.

Software is a remarkable phenomenon of our time. Your car, your office, your mobile phone, your company, your DVD player, your city administration, your airline: all of them work or function today with the help of software systems. None of them could still deliver the accustomed high level of service without software. Economically, software is increasingly becoming a basic resource. It breaks through the fine distinction economics draws between merchandise, goods and products. Software becomes a “commodity”. Every good software engineer becomes a producer of raw material.
Software engineering is the art, the craft and the industry of producing this expensive raw material. From the expert’s point of view, software engineering is the high art of engineering that plans, builds and successfully deploys good software, in a team, on schedule and within the allotted budget.
This book is a reality-based introduction to software engineering. It explains how to lead software projects to success methodically and without major risks. It conveys the basic framework for building solid software systems, provides a balanced view of the project as a whole, and serves as a guideline for executing modern software development projects properly, according to the technical state of the art, at effort scales from 12 person-months (e.g. a web shop for regional products) through 120 (e.g. the business administration of a medium-sized company) up to 1200 (e.g. the savings and investment management of a large bank).
The book is aimed at students of computer science, business informatics, media informatics, industrial engineering, mechanical engineering and electrical engineering, as well as readers from other fields with an interest in software. It is also useful for readers who already work in industry, are prospective project leads or team members in software projects, or simply want to refresh their knowledge.

T. Spitta, M. Carolla, H. Brune, T. Grechenig, S. Strobl, J. vom Brocke (Eds.): “Campus-Management Systeme als Administrative Systeme: Basiswissen und Fallbeispiele zur Gestaltung und Einführung”; essentials, Springer Vieweg, Wiesbaden, 2015, ISBN: 978-3-658-11594-4; 37 pages.

The authors explain what characterizes campus management as a type of software, which functions belong to the core of a CMS and which are not needed. They sketch the data basis of such systems in a reference data model. This model is described in more depth, and empirically validated, in the book Ein Referenz-Datenmodell für Campus-Management-Systeme in deutschsprachigen Hochschulen (Carolla 2015). Alongside the characteristics of administrative systems in higher education, the authors illustrate the introduction of two productive CMS that were developed in-house in Bielefeld and Vienna.

Contents:

  • The teaching subsystem
  • The research and “knowledge services” subsystems
  • The data model of a CMS
  • Conclusions on administrative systems, and case studies

Talks

JSUS/JUGAt Java User Group Meeting, TU Vienna, Austria, 2018-01-15: “The cool and the cruel of MicroServices”; https://www.meetup.com/de-DE/Java-Vienna/events/246397740/; Recording

MicroServices are on everyone’s lips and have lately been used in many new projects. But where are the limits, and at what point does a MicroService architecture actually make sense? On this question I would like to start a critical discussion of the pros and cons of currently fashionable architectural approaches.

JavaLand 2018, Brühl, Germany, 2018-03-13..15: “Das große Apache-Enterprise und Microservice-Puzzle”; https://programm.javaland.eu/2018/#/scheduledEvent/549003

Many Apache projects rightly enjoy a reputation as excellent little helpers for big tasks. This also holds for many projects in the Java enterprise space. The Apache Software Foundation offers a wealth of projects that normally do their work inside large application servers and multi-million-line frameworks. They can, however, also be used to build microservices of only a few megabytes, because enterprise and microservices are not necessarily a contradiction. This talk gives an overview of the various ASF projects in this area and how they can be put to sensible use. A further focus is a critical discussion of the pros and cons of currently fashionable architectural approaches.

GeeCON 2018, Krakow, Poland, 2018-05-09..11: “The cool and the cruel of MicroServices”; https://2018.geecon.org/speakers/info.html?id=367; Recording

Over the last few years everyone has been raving about MicroServices and how they will make every developer’s life so much better. Did this promise deliver? Why did we invent MicroServices in the first place? Where are the barriers, and in which scenarios does it pay off to use MicroServices? And in which situations are you better off resisting the temptation to always reach for the latest hyped tools like MicroServices? Let’s find out! In this talk I’d also like to share some feedback from various big real-world projects where this approach sometimes worked - and sometimes failed miserably… The target audience is managers and developers alike. I won’t dive into technical details except where necessary to understand the shortcomings of a certain design.

FOSS Backstage, Apache Roadshow, Berlin, Germany, 2018-06-13..14: “Jakarta EE and the road ahead for the ASF”; https://apachecon.com/euroadshow18/JakartaEEatAFS.pdf

The Java Enterprise world has gone through wild ups and downs lately. But where do we go from here? And what does the ASF have to do with all that? Let’s find out! This talk gives an overview of a number of ‘Enterprise’ projects at the ASF, their current state and when to use them. We’ll then shed light on Apache Meecrowave for MicroServices and MicroProfile applications, and finally move over to Apache TomEE 8 for serving classic enterprise applications while still aiming for simplicity and performance. The talk includes a basic setup and introduction to building apps with the platforms mentioned above.
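
The slides linked above cover the details; purely as an illustration of what a basic setup with one of these stacks can look like, here is a minimal sketch of an embedded Apache Meecrowave service with a single JAX-RS endpoint. The class name, path and response text are invented for the example; only Meecrowave’s bake() bootstrap and the standard javax.ws.rs/CDI annotations are assumed.

    import javax.enterprise.context.ApplicationScoped;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    import org.apache.meecrowave.Meecrowave;

    // Hypothetical endpoint class; Meecrowave picks up CDI beans carrying JAX-RS annotations.
    @ApplicationScoped
    @Path("/hello")
    public class HelloEndpoint {

        @GET
        public String hello() {
            return "hello from an embedded Meecrowave";
        }

        public static void main(String[] args) {
            // bake() boots the embedded Tomcat/CXF/OpenWebBeans stack with its default configuration.
            try (Meecrowave server = new Meecrowave().bake()) {
                // Keep the JVM alive while the embedded server handles requests.
                new java.util.Scanner(System.in).nextLine();
            }
        }
    }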

FOSS Backstage, Apache Roadshow, Berlin, Germany, 2018-06-13..14: “The cool and the cruel of MicroServices”; https://apachecon.com/euroadshow18/Cool-and-Cruel-MicroServices.pdf; Recording

Over the last few years everyone has been raving about MicroServices and how they will make every developer’s life so much better. Did this promise deliver? Why did we invent MicroServices in the first place? Where are the barriers, and in which scenarios does it pay off to use MicroServices? And in which situations are you better off resisting the temptation to always reach for the latest hyped tools like MicroServices? Let’s find out! In this talk I’d also like to share some feedback from various big real-world projects where this approach sometimes worked - and sometimes failed miserably.

Other Publications

M. Struberg: “Understanding JakartaEE”; JAXEnter.com article, 2018-08-22

R. Vallon, S. Strobl, M. Bernhart, R. Prikladnicki, T. Grechenig: “ADAPT: A Framework for Agile Distributed Software Development”; in: “IEEE Software”, Volume 33, Issue 6, Nov.-Dec. 2016

A growing number of developers are using agile practices in distributed software projects. Researchers have created the ADAPT (Agile Distributed Adaptable Process Toolkit) framework to guide the implementation of agile practices in distributed environments. The Web Extras detail the research methods the authors employed.

S. Strobl, M. Zoffi, M. Bernhart, T. Grechenig: “A Tiered Approach Towards an Incremental BPEL to BPMN 2.0 Migration”; in: “2016 IEEE International Conference on Software Maintenance and Evolution”

This report describes the challenges and experiences with the incremental migration from a BPEL to a BPMN 2.0 process engine. The transition is motivated by a strategic reorientation towards the new standard as well as the end of life of the previous product. The solution covers the preliminary steps of integrating the new platform into the existing application and the support for parallel operation. The paper further describes the incrementally executed reverse engineering of process definitions and the migration of instances by applying four different, tiered strategies in an economically viable way. The report concludes by detailing the lessons learned to provide additional guidance for attempts to apply the described approach.

H. Erben, R. Galler, T. Grechenig: “MineralBay – the portal for raw materials and projects from subsurface construction”; in: “Geomechanics and Tunnelling” (August 2015), Volume 8, Issue 4, pages 321–332

In order to achieve higher utilisation rates for the material excavated from underground construction sites, one of the main objectives is efficient, digital processing of available data on material, mass and time parameters for the mined rock. Selected information needs to be made available to a wide audience at the same time. This initiates a value chain, which goes far beyond the construction industry and enables successful upcycling. The evolving software MineralBay is committed to this goal by using the internet to bring together owners or suppliers of mineral resources with customers easily, quickly and at any time to facilitate the exchange and trading of raw materials. MineralBay is a management and merchandising system for excavated material, whose speciality is access to and processing of real-time data, e.g. in the form of online material analysis results from tunnel boring machines. The information obtained is used for quality management of already placed excavation material and for gapless documentation of the material flow from the beginning to the end of the construction project.

A. Mauczka, F. Brosch, C. Schanes, T. Grechenig: “Dataset of Developer-Labeled Commit Messages”; in: “Proceedings of the 12th Working Conference on Mining Software Repositories (MSR)”, IEEE, (2015), ISBN: 978-0-7695-5594-2; pp. 490 - 493.

Current research on change classification centers around automated and semi-automated approaches which are based on evaluation by either the researchers themselves or external experts. In most cases, the persons evaluating the effectiveness of the classification schemes are not the authors of the original changes and can therefore only make assumptions about the intent of the changes. To support validation of existing labeling mechanisms and to provide a training set for future approaches, we present a survey of source code changes that were labeled by their original authors. Seven developers from six different projects applied three existing classification schemes from current literature to enrich their own changes with meta-information, so that the intent of the changes becomes more evident. The final data set consists of 967 classified changes and is available as an SQLite database as part of the MSR data set.

N. Ilo, J. Grabner, T. Artner, M. Bernhart, T. Grechenig: “Combining software interrelationship data across heterogeneous software repositories”; in: “Proceedings of the IEEE International Conference on Software Maintenance and Evolution (ICSME 2015)”, IEEE, (2015), ISBN: 978-1-4673-7532-0; pp. 571 - 575.

Software interrelationships have an impact on the quality and evolution of software projects and are therefore important to development and maintenance. Package management and build systems result in software ecosystems that usually are syntactically and semantically incompatible with each other, although the described software can overlap. There is currently no general way for querying software interrelationships across these different ecosystems. In this paper, we present our approach to combine and consequently query information about software interrelationships across different ecosystems. We propose an ontology for the semantic modeling of the relationships as linked data. Furthermore, we introduce a temporal storage and query model to handle inconsistencies between different data sources. By providing a scalable and extensible architecture to retrieve and process data from multiple repositories, we establish a foundation for ongoing research activities. We evaluated our approach by integrating the data of several ecosystems and demonstrated its usefulness by creating tools for vulnerability notification and license violation detection.
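
The paper proposes its own ontology and infrastructure, which are not reproduced here; as a loose illustration of what “modeling software relationships as linked data” means in practice, the following sketch uses Apache Jena to record one cross-ecosystem relationship. The swrel vocabulary, the example URIs and the property names are all invented for this sketch and are not the ontology from the paper.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class InterrelationshipSketch {
        public static void main(String[] args) {
            // Hypothetical vocabulary for cross-ecosystem software relationships.
            String ns = "http://example.org/swrel#";
            Model model = ModelFactory.createDefaultModel();
            model.setNsPrefix("swrel", ns);

            Property dependsOn = model.createProperty(ns, "dependsOn");
            Property sameProjectAs = model.createProperty(ns, "sameProjectAs");

            // The same library as seen from two different ecosystems (a Maven artifact and a Debian package).
            Resource mavenArtifact = model.createResource("http://example.org/maven/commons-io/2.4");
            Resource debianPackage = model.createResource("http://example.org/debian/libcommons-io-java/2.4");
            Resource consumer = model.createResource("http://example.org/maven/org.example.app/1.0");

            consumer.addProperty(dependsOn, mavenArtifact);
            mavenArtifact.addProperty(sameProjectAs, debianPackage);

            // Serialize the combined graph as Turtle; such a graph can later be queried with SPARQL.
            model.write(System.out, "TURTLE");
        }
    }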

T. Spitta, M. Carolla, H. Brune, T. Grechenig, S. Strobl, J. vom Brocke: “Campus-Management Systeme”; Informatik Spektrum, 38 (2015), 1; pp. 59 - 71.

Based on a comprehensive literature review and years of practical experience with administrative software, the paper shows why campus management systems are heavyweight transactional systems. They are rather new in German-speaking countries, and the organizations implementing them therefore do not yet have enough experience with the subject matter and the large size of the projects involved. Two case studies are presented.

T. Spitta, T. Grechenig, H. Brune, M. Carolla, S. Strobl: “Campus Management Systems as Administrative Software Systems”; Bielefeld Working Papers in Economics and Management, SSRN (2014), 06; 17 pages.

Caused by a politically initiated break in German-speaking European countries – the so-called Bologna Process – we observe a huge demand for new information systems supporting the academic processes of teaching and research. Software of the demanded quality is not available on the market. Some systems are large pilot projects at pioneer universities. Because universities – in contrast to enterprises – have little experience in implementing and operating such systems, it seems worthwhile to examine the essentials of organizational information systems from the ground up. Following Lehman’s definition of embedded systems 35 years ago, we look at very complex systems embedded into large organizations. The complexity of such a system’s software stems from its database, which is created and maintained by the organization’s users. From this basic view of the original data we argue which functions are part of the core of a campus management system (CaMS) and which are not. E-learning or the library, for example, do not belong to this core but need secure and efficient interfaces to it. Because CaMS are large and expensive, they should be introduced into an organization in an evolutionary way.

R. Vallon, S. Strobl, M. Bernhart, T. Grechenig: “Inter-organizational Co-development with Scrum: Experiences and Lessons Learned from a Distributed Corporate Development Environment”; in: “Agile Processes in Software Engineering and Extreme Programming”, H. Baumeister, B. Weber (Eds.); Springer Lecture Notes in Business Information Processing, 149 (2013), ISBN: 978-3-642-38313-7; pp. 150 - 164.

Distributed development within a single organization adds a lot of overhead to every software development process. When a second organization joins for co-development, complexity reaches the next level. This case study investigates an agile approach from a real-world project involving two unaffiliated IT organizations that collaborate in a distributed development environment. Adaptations to the regular Scrum process are identified and evaluated over a six-month period. The evaluation involves a detailed root cause analysis of the problems found and suggestions on which issues to act on first. Key lessons learned include that the members of one Scrum team should not be distributed over several sites and that every site should have at least one Scrum master and one product owner.

R. Vallon, K. Bayrhammer, S. Strobl, M. Bernhart, T. Grechenig: “Identifying Critical Areas for Improvement in Agile Multi-site Co-development”; in: “Proceedings of the 8th International Conference on Evaluation of Novel Approaches to Software Engineering”, L. Maciaszek, J. Filipe (Eds.); SciTePress, (2013), ISBN: 978-989-8565-62-4; pp. 7 - 14.

Agile processes potentially ease distributed software development by demanding regular communication and self-management of virtual team members. However, since agile processes were designed for collocated teams, extensions to the regular process need to be made. We investigate critical areas of improvement based on a case of distributed Scrum involving two unaffiliated Austrian IT organizations that collaborate to build software. We identified eight critical areas for improvement originating from interviews, retrospective meetings and an in-depth case analysis. Key suggestions for practice include the establishment of long-lived single-site Scrum teams and the application of Behavior Driven Development (BDD) to make implicit requirement knowledge explicit and transparent to all of the distributed parties.

M. Bernhart, T. Grechenig: “On the understanding of programs with continuous code reviews”; in: “Proceedings of the 21st International Conference on Program Comprehension”, H. Kagdi et al. (Eds.); Conference Publishing Consulting, Passau, Germany (2013), ISBN: 978-1-4673-3092-3; pp. 192 - 198.

Code reviews are a very effective but effortful quality assurance technique. A major problem is to read and understand source code that was produced by someone else. With different programming styles and complex interactions, understanding the code under review is the most expensive sub-task of a code review. As with many other modern software engineering practices, code reviews may be applied as a continuous process to reduce the effort and support the concept of collective ownership. This study evaluates the effect of a continuous code review process on the understandability and collective ownership of the code base. A group of 8 subjects performed a total of 114 code reviews within 18 months in an industrial context and conducted an expert evaluation with regard to this research question. The study concludes that continuous code reviews have a clear positive effect on the understandability and collective ownership of the code base, but also that there are limiting factors and drawbacks for complex review tasks.

A. Mauczka, A. Huber, C. Schanes, W. Schramm, M. Bernhart, T. Grechenig: “Tracing your maintenance work - a cross-project validation of an automated classification dictionary for commit messages”; in: “Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering (FASE’12)”, Springer-Verlag, Berlin, Heidelberg (2012), ISBN: 978-3-642-28871-5; pp. 301 - 315.

A commit message is a description of a change in a Version Control System (VCS). Besides the actual description of the change, it can also serve as an indicator of the purpose of the change; e.g. a change to refactor code might be accompanied by a commit message of the form “Refactored class XY to improve readability”. According to the maintenance literature, we would label the change in our example a perfective change. This simplified example shows how a change can be classified by its commit message. However, commit messages are unstructured, textual data, and efforts to automatically label changes into categories like perfective have only been applied to a small set of projects within the same company or the same community. In this work, we present a cross-project evaluated and validated mapping between changes to the code base and their purpose that is usable without any customization on any open-source project. We further provide the Eclipse plug-in Subcat, which allows for convenient analysis of projects from within Eclipse. By using Subcat, we are able to automatically assess whether a commit to the code was e.g. a bug fix or a refactoring. This information is very useful for, e.g., developer profiling or locating bad smells in modules.
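
The validated dictionary itself ships with the authors’ tooling and is not reproduced here; purely to illustrate the underlying idea of mapping commit messages to maintenance categories via keywords, the following sketch uses a small, invented keyword list (the category names follow the maintenance literature mentioned above, the keywords do not come from the paper).

    import java.util.List;
    import java.util.Map;

    public class CommitClassifierSketch {

        // Illustrative keyword dictionary only; the paper's validated dictionary is larger and different.
        private static final Map<String, List<String>> DICTIONARY = Map.of(
            "corrective", List.of("fix", "bug", "error", "fail"),
            "perfective", List.of("refactor", "cleanup", "readability", "simplify"),
            "adaptive",   List.of("add", "support", "feature", "upgrade"));

        // Returns a category whose keywords appear in the commit message, or "unclassified".
        public static String classify(String commitMessage) {
            String msg = commitMessage.toLowerCase();
            return DICTIONARY.entrySet().stream()
                .filter(e -> e.getValue().stream().anyMatch(msg::contains))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElse("unclassified");
        }

        public static void main(String[] args) {
            // The example message from the abstract is labeled perfective by this toy dictionary.
            System.out.println(classify("Refactored class XY to improve readability"));
        }
    }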

M. Bernhart, S. Strobl, A. Mauczka, T. Grechenig: “Applying Continuous Code Reviews in Airport Operations Software”; in: “Proceedings of the 12th International Conference on Quality Software (QSIC), 2012”, A. Tang, H. Muccini (Eds.); IEEE, (2012), ISBN: 978-1-4673-2857-9; pp. 214 - 219.

Code reviews are an integral part of the development of a dependable system such as one for airport operations. It is commonly accepted that code reviews are an effective quality assurance technique, even if a rigorous application is also a high cost factor. For large software systems a formal method may be inapplicable throughout the whole code base. In this study an airport operational database (AODB) is developed with a more lightweight approach to code reviews. A continuous, distributed and change-based process is applied by the development team and evaluated against team walkthroughs (IEEE-1028) as a baseline method. The approach proved to be highly useful, equally effective as the baseline, and more efficient, especially with regard to preparation, execution and rework effort. The results show that continuous code reviews also support the understanding of the code base and the concept of collective ownership. Such processes may not completely substitute for a more formal and effortful technique; especially for reviewing critical design aspects or complex items, a traditional approach is still more appropriate. The main outcome is that such lightweight code reviews may be used together with more formal approaches to ensure high coverage, and that the degree of formalism should be adapted to the criticality of the item under review.

M. Bernhart, A. Mauczka, M. Fiedler, S. Strobl, T. Grechenig: “Incremental Reengineering and Migration of a 40 Year Old Airport Operations System”; in: “Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM), 2012”, (2012).

This report describes the challenges and experiences with the incremental re-engineering and migration of a 40 year old airport operations system. The undocumented COBOL legacy system has to be replaced within given constraints such as limited downtime. A 3-step technical strategy is derived and successfully applied to the re-engineering task in this project. The incremental approach and resulting parallel operations of both systems are the most significant technical drivers for complexity in this environment. Furthermore, this report describes the process for planning, analyzing and designing a replacement system that is backed by strong user acceptance. The user interface design task of taking the system from VT100 to a web interface was a critical success factor, as well as live testing with actual production data and actual user interactions. Other aspects such as training and end user documentation are discussed.

S. Strobl: “Robogyan 2K12”; National Level Robotics Workshop on Image Processing Robots, Pulivendula, Andhra Pradesh, India 2012.

M. Bernhart: “Towards Differential-Based Continuous Code Reviews”; report for the Austrian Marshall Plan Foundation; 2012; 34 pages.

This dissertation describes, formalizes and evaluates the latest development toward continuous code review processes, which are now part of modern software engineering processes such as those of popular open source projects. In many software assurance techniques, there is a shift towards continuous processes that provide better scalability compared to traditional approaches. A continuous and differential code review process based on changeset-reviews (CBR) and task-reviews (TBR) is described and forms the foundation for a tool development. The resulting Mylyn Reviews project now provides the core components to build code review tools for Eclipse, e.g. the Gerrit connector. This bridges the gap between the review infrastructure and the development environment and is now the de facto standard for code reviews in Eclipse projects. The proposed processes and the corresponding tools have been adapted to the specific needs of software development for air traffic management (ATM) to comply with the RTCA DO-278/ED-109 requirements. Key factors are full traceability to a specific software version, to the source code (for review coverage) and to the related software requirements. To evaluate the indicated benefits of continuous code reviews, a comparative empirical study was performed in an airport operations database (AODB) software engineering project. The study concluded that continuous code reviews may be used to achieve high review coverage, but may be supplemented with more traditional processes for selected critical parts of the system to ensure effectiveness.

J. Grabner, A. Mauczka, M. Bernhart, T. Grechenig: “Exploiting semantic aspects to evolve a text-based search on a legacy document management system”; in: “Proceedings of the Twenty-Third International Conference on Software Engineering & Knowledge Engineering”, Knowledge Systems Institute Graduate School, Skokie, IL 60076, USA (2011), ISBN: 1891706292; pp. 392 - 397.

M. Bernhart, S. Reiterer, K. Matt, A. Mauczka, T. Grechenig: “A Task-Based Code Review Process and Tool to Comply with the DO-278/ED-109 Standard for Air Traffic Management Software Development: An Industrial Case Study”; HASE 2011: 182-187

Software reviews are one of the most efficient quality assurance techniques in software engineering. They are required for the enhancement of the software quality in early phases of the development process and often used in development of safety critical systems. In the field of software engineering for Air Traffic Management (ATM) the standard DO-278/ED-109 requires the rigorous application of code reviews and fully traceable reporting of the results. This case study presents a process and an IDE-integrated tool that complies with the requirements of the standard.

M. Reiterer, S. Strobl: “TapiJI - Internationalisierung leicht gemacht”; Eclipse-Magazin, 2 (2011), pp. 62 - 66.

M. Bernhart, S. Reiterer, K. Matt: “Case Study: Mylyn Reviews for Software Development in Air Traffic Management”; EclipseCon 2011, talk.

M. Bernhart, T. Artner, A. Mauczka, T. Grechenig: “Automated Integration Testing and Verification of a Secured SOA Infrastructure - an Experience Report in eHealth”; in: “Proceedings of the Twenty-Second International Conference on Software Engineering & Knowledge Engineering”, Knowledge Systems Institute Graduate School, (2010), ISBN: 978-1-891706-26-4; pp. 198 - 202.

A. Mauczka, M. Bernhart, T. Grechenig: “Analyzing the Relationship of Process Metrics And Classified Changes - A Pilot Study”; in: “Proceedings of the Twenty-Second International Conference on Software Engineering & Knowledge Engineering”, Knowledge Systems Institute Graduate School, (2010), ISBN: 978-1-891706-26-4; pp. 269 - 272.

A. Mauczka, C. Schanes, F. Fankhauser, M. Bernhart, T. Grechenig: “Mining security changes in FreeBSD”; in: “Proceedings of the 7th IEEE Working Conference on Mining Software Repositories (MSR)”, IEEE, (2010), ISBN: 978-1-4244-6803-4; pp. 90 - 93.

Current research on historical project data rarely touches on the subject of security-related information. Learning how security is treated in projects and which parts of a software system are historically security-relevant or prone to security changes can enhance the security strategy of a software project. We present a mining methodology for security-related changes by modifying an existing method of software repository analysis. We use the gathered security changes to find out more about the nature of security in the FreeBSD project, and we try to establish a link between the identified security changes and a tracker for security issues (security advisories). We give insights into how security is handled in the FreeBSD project and show how the mined data and known security problems are connected.

S. Strobl, M. Bernhart, T. Grechenig: “An experience report on the incremental adoption and evolution of an SPL in eHealth”; in: “Proceedings of the 2010 ICSE Workshop on Product Line Approaches in Software Engineering”, ACM, New York, NY, USA (2010), ISBN: 978-1-60558-968-8; pp. 16 - 23.

This work presents an experience report on the evolutionary development of a software product line (SPL) in the eHealth domain. The effort was triggered by the concurrent development of two similar products and the ambition to reduce redundant development and quality assurance. The result is a scalable base for a complex, highly adaptable information system. This system is required to be applicable in multiple business domains and diverging environments ranging from large scale hospitals to single practitioner clinics.

During this effort we were able to extract the common denominator, in the form of core assets, from existing applications specific to a medical field. For customisations, well-defined variation points were developed. Our solution allows medical documentation requirements to be implemented easily, compared to the tedious development of new applications from scratch. It significantly reduced the necessary development effort and time to market. The resulting core documentation platform can be used for an arbitrary medical field, completely eliminating the dependence on the specific customer domain.

A. Mauczka, M. Bernhart, T. Grechenig: “Adopting Code Reviews for Agile Software Development”; in: “Proceedings of the Agile Conference”, agile, (2010), ISBN: 978-0-7695-4125-9; pp. 44 - 47.

Code reviews have many benefits, most importantly finding bugs early in the development phase and enforcing coding standards. Still, it is widely accepted that formal code reviews are time-consuming, and their practical applicability in agile development is controversial. This work presents a continuous, differential-based method and tool for code reviews. With a continuous approach to code reviews, the review overhead can be reduced and the effectiveness and applicability in agile environments improved.

M. Bernhart, A. Mauczka, T. Grechenig: “Towards Code Reviews for Agile Software Development”. QUATIC 2010.

M. Bernhart, K. Matt: “Mylyn Reviews - Finding a new Home for ReviewClipse”. EclipseCon 2010, talk.

M. Bernhart, K. Matt: “Mylyn Reviews - Finding a new Home for ReviewClipse”. Eclipse Summit Europe 2010, talk.

M. Bernhart, A. Mauczka, C. Mayerhofer, T. Grechenig: “ReviewClipse@Class: Code Reviews in Undergraduate Software Engineering Education”. CSEET 2010.

S. Strobl, M. Bernhart, T. Grechenig, W. Kleinert: “Digging deep: Software reengineering supported by database reverse engineering of a system with 30+ years of legacy”; in: “Software Maintenance, 2009. ICSM 2009.”, IEEE, (2009), ISBN: 978-1-4244-4897-5; pp. 407 - 410.

This paper describes the industrial experience of performing database reverse engineering on a large-scale software reengineering project. The project in question deals with a highly heterogeneous in-house information system (IS) that has grown and evolved in numerous steps over the past three decades. This IS consists of a large number of loosely coupled single-purpose systems with a database-driven COBOL application at the centre, which has been adapted and enhanced to expose some functionality over the web. The software reengineering effort that provides the context for this paper deals with unifying these components and completely migrating the IS to an up-to-date and homogeneous platform. A database reverse engineering (DRE) process was tailored to suit the project environment, consisting of almost 350 tables and 5600 columns. It aims at providing the developers of the software reengineering project with the necessary information about the more than thirty-year-old legacy databases to successfully perform the data migration. The application of the DRE process resulted in a high-level categorization of the data model, a wiki-based redocumentation structure and the essential data-access statistics.

A. Mauczka, T. Grechenig, M. Bernhart: “Predicting Code Change by using static metrics”; in: “SERA 2009 Proceedings”, Conference Publishing Services, (2009).

Maintenance of software is risky, potentially expensive - and inevitable. The main objective of this study is to examine the relationship of code change, referred to as maintenance effort, with source-level software metrics. This approach varies from the typical approach of evaluating software metrics against failure data and provides a different angle on the validation of software metrics. The goal of this study is to show through exhaustive data mining that a relation between software metrics and code change exists. Once this connection is established, a set of software metrics is identified, which will be used in further studies to predict code change in problematic modules identified by the software metrics at an early development stage.

M. Bernhart, C. Mayerhofer, T. Grechenig: “ReviewClipse - Kontinuierliche Code-Reviews mit Subversion und Eclipse”; Eclipse-Magazin, 6.09 (2009), pp. 37 - 38.

M. Bernhart, C. Mayerhofer, T. Grechenig: “Kontinuierliche Code-Reviews mit Subversion und Eclipse - Schulterblick”; Heise-Developer, Heise Zeitschriften Verlag, (2009), 5 pages.

M. Bernhart, C. Mayerhofer, T. Grechenig: “ReviewClipse - Supporting Code-Reviews within the Eclipse IDE”; EclipseCon 2009, Santa Clara, CA, USA; 23.03.2009 - 26.03.2009.

M. Bernhart, C. Mayerhofer, T. Grechenig: “Kontinuierliche Code-Reviews mit Subversion und Eclipse - Schulterblick”; SubConf & CMConf 2009, Munich, Germany; 27.10.2009 - 29.10.2009.

M. Bernhart, C. Mayerhofer, T. Grechenig: “ReviewClipse - Continuous Code Reviews within the Eclipse IDE”; Eclipse Summit 2009, Ludwigsburg, Germany; 27.10.2009 - 29.10.2009.

M. Bernhart, C. Mayerhofer, T. Grechenig: “ReviewClipse - Continuous Code Reviews within the Eclipse IDE”; Eclipse DemoCamp Vienna 2009, Vienna, Austria; 30.11.2009.

T. Wild, T. Hölzenbein, T. Grechenig, M. Bernhart, A. Binder, B. Horn, S. Strobl, J. Unosson, M. Prinz, A. Wujciow: “Digitale Wunddiagnostik und -dokumentation mit W.H.A.T. als Basis für eine integrative Versorgung”; Wundmanagement, 06 (2009).

M. Bernhart, T. Grechenig, J. Hetzl, W. Zuser: “Dimensions of Software Engineering Course Design”; in: “Proceedings of the 28th International Conference on Software Engineering (ICSE 2006)”, ACM Press, Shanghai, China, 20-28 May 2006, ISBN: 1-59593-375-1; pp. 667 - 672.

A vast variety of topics relate to the field of Software Engineering. Some universities implement curricula covering all aspects of Software Engineering. A number of other courses cover detailed aspects, e.g. programming, usability and security issues, analysis, architecture, design, and quality. Other universities offer general curricula that consider Software Engineering in only a few courses or even a single one. In each case, a course set has to be defined which directly relates to a specific student outcome. This work provides a method for categorizing and analyzing a course set within abstract dimensions for course design. We subsequently show the results of applying the dimensions to the course degree scheme in use. The course design dimensions can also be related to the student outcomes defined in SE2004 CC Section 3.2 [10].
