A view of an object that focuses on the information relevant to a particular purpose and ignores the remainder of the information [IEEE Std 610.12-1990]
A description of something that omits some details that are not relevant to the purpose of the abstraction. It is the converse of refinement [D'Souza&Wills 1999]
The act or process of leaving out of consideration one or more properties of a complex object so as to attend to others. Abstraction in programming is the process of identifying common patterns that have systematic variations; an abstraction represents the common pattern and provides a means for specifying which variation to use. [Gabriel 1996]
abstract syntax tree
Compilers often construct an abstract syntax tree (AST) for the semantic analysis. Its nodes are programming language constructs and its edges express the hierarchical relation between these constructs. From [Koschke 1998]: ``The structure of an AST is a simplification of the underlying grammar of the programming language, e.g., by generalization or by suppressing chain rules. (...) This structure can be generalized so that it can be used to represent programs of different languages.''
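As a concrete illustration, Python's standard `ast` module builds exactly such a tree; the sketch below parses a tiny (made-up) function and prints each construct at its nesting depth, showing the hierarchical relation between constructs that the definition describes.

```python
import ast

# Parse a small, hypothetical program into an abstract syntax tree.
source = """
def area(r):
    return 3.14159 * r * r
"""
tree = ast.parse(source)

def walk(node, depth=0):
    """Yield (depth, construct-name) pairs for every node in the tree."""
    yield depth, type(node).__name__
    for child in ast.iter_child_nodes(node):
        yield from walk(child, depth + 1)

for depth, kind in walk(tree):
    print("  " * depth + kind)  # e.g. Module, then FunctionDef, then Return, ...
```

Note how the node names (Module, FunctionDef, Return, BinOp) correspond to programming language constructs, while chain rules of the concrete grammar are suppressed.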
Adaptability concerns the alteration of a system to fit the needs of a user without necessarily changing it from one machine to another. [NATO 1970]
Modification of a software product performed after delivery to keep a computer program usable in a changed or changing environment [IEEE Std 1219-1998]
See also: software maintenance
Aspect definitions consist of pointcuts and advice. Advice is the code that crosscuts the dominant decomposition of a software system.
agile software development
According to Scott W. Ambler, a respected authority in the agile methods community, agile software development is an iterative and incremental (evolutionary) approach to software development, performed in a highly collaborative manner with "just enough" ceremony, that produces high-quality software meeting the changing needs of its stakeholders. Agile methods refer to a collection of "lightweight" software development methodologies that aim to minimise risk and achieve customer satisfaction through a short feedback loop.
alternative hypothesis (H1)
The hypothesis that remains tenable when the null hypothesis is rejected [ISERN]
The alternative hypothesis posits that there is a significant difference between two treatments (that is, between two methods, tools, techniques, environments or other conditions whose effects you are measuring) with respect to the dependent variable you are measuring (such as productivity, quality or cost) [Fenton&Pfleeger 1996].
See also: null hypothesis, statistical hypothesis.
The phase in the software life-cycle that emphasises investigation of what the problem is rather than how a solution must be defined.
Term introduced by Lehman and Belady to describe the work done to decrease the complexity of a program without altering the functionality of the system as perceived by users. Anti-regressive work includes activities such as code rewriting, refactoring, reengineering, restructuring, redocumenting, and so on.
The organizational structure of a system or component [IEEE Std 610.12-1990]
The fundamental organisation of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution. [IEEE Std 1471-2000]
The architecture of a software system is the structural and behavioural framework on which all other aspects of the system depend. It is the organisational structure of a software system including components, connections, constraints, and rationale.
A software system's architecture is the set of principal design decisions about the system. (Richard Taylor)
David Garlan states that an architectural style "defines constraints on the form and structure of a family of architectural instances''.
(1) The process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system. (2) The result of the process in (1). [IEEE Std 610.12-1990]
That part of the design phase where the software architecture is defined.
A collection of products to document an architecture. [IEEE Std 1471-2000]
The result of any activity in the software life-cycle such as requirements, architecture model, design specifications, source code and test scripts
A piece of information that is used or produced by a software development process. An artifact can be a model, a description, or software.
A collection of artefacts.
A modular unit designed to implement a (crosscutting) concern. In other words, an aspect provides a solution for abstracting code that would otherwise be spread throughout (i.e., cross-cut) the entire program. Aspects are composed of pointcuts and advice.
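The pointcut/advice structure can be sketched in plain Python using decorators; this is a hedged emulation, not a real aspect weaver. The "pointcut" is a name pattern selecting join points (here, function calls) and the "advice" is the crosscutting code (here, logging) run around each matched call; all names below are hypothetical.

```python
import fnmatch
import functools

log = []  # the crosscutting concern: a call log

def make_aspect(pointcut_pattern, advice):
    """Return a weaver: it wraps functions whose names match the pointcut."""
    def weave(fn):
        if not fnmatch.fnmatch(fn.__name__, pointcut_pattern):
            return fn  # join point not matched: leave the function untouched
        @functools.wraps(fn)
        def woven(*args, **kwargs):
            advice("before", fn.__name__)   # before-advice
            result = fn(*args, **kwargs)
            advice("after", fn.__name__)    # after-advice
            return result
        return woven
    return weave

logging_aspect = make_aspect("save_*", lambda when, name: log.append((when, name)))

@logging_aspect
def save_account(n):
    return n

@logging_aspect
def compute(n):
    return n * 2

save_account(1)
compute(2)
print(log)  # → [('before', 'save_account'), ('after', 'save_account')]
```

Only `save_account` matches the pointcut, so the advice runs around it and leaves `compute` untouched; the logging concern is localised in one module instead of being scattered through both functions.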
The activity of locating opportunities for introducing aspects in non aspect-oriented software. A distinction can be made between manual exploration, supported by special-purpose browsers and source-code navigation tools, and aspect mining techniques that try to automate this process of aspect discovery and propose one or more aspect candidates to the user.
The activity that turns potential aspects into actual aspects in some aspect-oriented language, after a set of potential aspects have been identified in the aspect exploration phase.
The process of progressively modifying the elements of an aspect-oriented software system in order to improve or maintain its quality over time, under changing contexts and requirements.
The process of migrating a software system that is written in a non aspect-oriented way into an aspect-oriented equivalent of that system.
The activity of semi-automatically discovering those crosscutting concerns that potentially could be turned into aspects, from the source code and/or run-time behaviour of a software system.
aspect-oriented software development (AOSD)
A new approach to software development that addresses limitations inherent in other approaches, including object-oriented programming. AOSD aims to address crosscutting concerns by providing means for their systematic identification, separation, representation and composition. Crosscutting concerns are encapsulated in separate modules, known as aspects, so that localization is promoted. This results in better support for modularization, hence reducing development, maintenance and evolution costs.
The process of composing the core functionality of a software system with the aspects that are defined on top of it, thereby yielding a working system.
According to Kent Beck [Fowler 1999], a bad smell is a structure in the code that suggests, and sometimes even screams for, opportunities for refactoring.
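A hypothetical illustration of a smell and the refactoring it suggests: the duplicated-expression smell, resolved by Extract Method. The invoice example and all names are invented for illustration.

```python
# Before: the discounted line-price formula is repeated in two places —
# a smell that suggests Extract Method.
def invoice_total_smelly(items):
    return sum(i["price"] * i["qty"] * (1 - i["discount"]) for i in items)

def receipt_lines_smelly(items):
    return [f'{i["name"]}: {i["price"] * i["qty"] * (1 - i["discount"]):.2f}'
            for i in items]

# After: the shared computation is extracted into one named function,
# so a future change to the pricing rule happens in exactly one place.
def line_price(item):
    return item["price"] * item["qty"] * (1 - item["discount"])

def invoice_total(items):
    return sum(line_price(i) for i in items)

def receipt_lines(items):
    return [f'{i["name"]}: {line_price(i):.2f}' for i in items]

items = [{"name": "book", "price": 20.0, "qty": 2, "discount": 0.1}]
print(invoice_total(items))  # → 36.0 (behaviour unchanged by the refactoring)
```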
(1) A standard against which measurements or comparisons can be made. (2) A problem, procedure, or test that can be used to compare systems or components to each other or to a standard as in (1). (3) A recovery file. [IEEE Std 610.12-1990]
A benchmark is a set of tests used to compare the performance of alternative tools, methods, or techniques.
A kind of reuse where a component is reused "as is" (i.e. without changing anything in the component). Paul Bassett argues that this is not a kind of reuse, but simply a use of the component!
A model of real-world objects and their interactions, or rather, some users' understanding of them [D'Souza&Wills 1999]
A step or set of steps in a process or procedure or guide (algorithmic or heuristic) used by a customer for doing its business, work, or function, and often embodied in whole or in part in the software of a system [Chapin et al. 2001]
Capability Maturity Model (CMM)
Defined by the Software Engineering Institute (SEI) at Carnegie Mellon University. Describes the level of capability and maturity a software team could aim for and could be assessed against.
A case study is a research technique where you identify key factors that may affect the outcome of an activity and then document the activity: its inputs, constraints, resources, and outputs. Case studies usually look at a typical project, rather than trying to capture information about all possible cases; these can be thought of as "research in the typical". Formal experiments, case studies and surveys are three key components of empirical investigation in software engineering. [Fenton&Pfleeger 1996]
The term case study is also often used in an engineering sense of the word. Testing a given technique or tool on a representative case against a predefined list of criteria and reporting about the lessons learned.
A software tool that helps software designers and developers specify, generate and maintain some or all of the software components of an application. Many popular CASE tools provide functions to allow developers to draw database schemas and to generate the corresponding code in a data description language (DDL). Other CASE tools support the analysis and design phases of software development, for example by allowing the software developer to draw different types of UML diagrams.
Record with some of the information related to one or several amendments (i.e., changes) made to the code or to another software artefact. The record generally includes the person responsible, the date and some explanation (e.g., reasons for which a change was made).
Occurs when making a change to one part of a software system requires other system parts that depend on it to be changed as well. These dependent system parts can in turn require changes in other system parts. In this way, a single change to one system part may lead to a propagation of changes throughout the entire software system.
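The propagation described above can be sketched as a traversal of a dependency graph; the graph below is a made-up example, with edges pointing from an artefact to the artefacts that depend on it.

```python
from collections import deque

# Hypothetical system: each key maps to the artefacts that depend on it.
dependents = {
    "Parser": ["TypeChecker", "Formatter"],
    "TypeChecker": ["CodeGen"],
    "Formatter": [],
    "CodeGen": [],
}

def propagation_set(changed, graph):
    """Breadth-first traversal: all artefacts a change may propagate to."""
    affected, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

print(sorted(propagation_set("Parser", dependents)))
# → ['CodeGen', 'Formatter', 'TypeChecker']
```

Note that `CodeGen` is reached transitively: it does not depend on `Parser` directly, yet a change to `Parser` may still propagate to it via `TypeChecker`.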
(French) A pattern describing salient features of a concept that supports recognition of that concept in some specified context by application of some specified comparison algorithm.
A collection of related features or characteristics that provide a shared technical vocabulary, including inter-feature relationships. (Source: Programmer's Apprentice, 1990)
client interface of a class
The set of all methods exported by the class (= public methods)
The client interface is used to access the functionality of objects. The interface is accessed by sending messages to objects, and the internal structure of the objects shouldn't be evident. [Lamping 1993]
See also: specialisation interface
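In Python the client interface is a convention rather than an enforced mechanism; the sketch below (an invented `Account` class) treats names without a leading underscore as the exported methods and underscore-prefixed names as internal structure that should not be evident to clients.

```python
class Account:
    def __init__(self):
        self._balance = 0          # internal state, not part of the interface

    def deposit(self, amount):     # client interface
        self._check(amount)
        self._balance += amount

    def balance(self):             # client interface
        return self._balance

    def _check(self, amount):      # internal helper, hidden by convention
        if amount <= 0:
            raise ValueError("amount must be positive")

# Compute the client interface: public, callable attributes of the class.
client_interface = [n for n in dir(Account)
                    if not n.startswith("_") and callable(getattr(Account, n))]
print(client_interface)  # → ['balance', 'deposit']
```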
Clones are segments of code that are similar according to some definition of similarity. (Ira Baxter, 2002)
A software clone is a special kind of software duplicate. It is a piece of software (e.g., a code fragment) that has been obtained by cloning (i.e., duplicating via the copy-and-paste mechanism) another piece of software and perhaps making some additional changes to it. This primitive kind of software reuse is more harmful than it is beneficial. It actually makes the activities of debugging, maintenance and evolution considerably more difficult.
The activity of locating duplicates or fragments of code with a high degree of similarity and redundancy.
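One simple textual clone-detection strategy, sketched below under the assumption that similarity means "identical after normalisation": strip whitespace, replace identifiers and numeric literals with placeholders, and report lines whose normalised form occurs more than once. Real clone detectors (token-, AST- or metrics-based) are considerably more sophisticated.

```python
import re
from collections import defaultdict

def normalise(line):
    """Abstract away names and literals so renamed copies compare equal."""
    line = re.sub(r"\b[a-zA-Z_]\w*\b", "ID", line)  # identifiers -> ID
    line = re.sub(r"[0-9]+", "N", line)             # numeric literals -> N
    return re.sub(r"\s+", "", line)                 # ignore layout

def find_clones(lines):
    """Group line indices by normalised form; keep groups of size > 1."""
    buckets = defaultdict(list)
    for i, line in enumerate(lines):
        key = normalise(line)
        if key:
            buckets[key].append(i)
    return [idx for idx in buckets.values() if len(idx) > 1]

code = ["total = price * 3", "x = 1", "sum = cost * 7"]
print(find_clones(code))  # → [[0, 2]]: lines 0 and 2 normalise identically
```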
The set of features or properties of a component (or system) that are the same, or common, between systems
The ease of combining software elements with others [Meyer 1997]
The ability of two or more systems or components to perform their required functions while sharing the same hardware or software environment [IEEE Std 610.12-1990]
The ability of two or more systems or components to exchange information [IEEE Std 610.12-1990]
The degree to which a system or component has a design and implementation that is difficult to understand and verify [IEEE Std 610.12-1990]
That property of a language expression which makes it difficult to formulate its overall behaviour, even when given almost complete information about its atomic components and their inter-relations. [Edmonds 1997]
A component is a high-quality workproduct, designed, documented, and packaged to be reusable. A component is cohesive and has a stable interface [Jacobson et al. 1997].
A component is a physical and replaceable part of a system that conforms to and provides the realisation of a set of interfaces (Grady Booch).
A component is a self-contained piece of software with clearly-defined interfaces and explicitly-declared context dependencies [Stahl&Volter 2006].
Mary Shaw and David Garlan define software components as "the loci of computation and state. Each component has an interface specification that defines its properties, which include the signatures and functionality of its resources together with global relations, performance properties, and so on. (...)''
Compression is the characteristic of a piece of text that the meaning of any part of it is "larger" than that particular piece has by itself. This characteristic is created by a rich context, with each part of the text drawing on that context - each word draws part of its meaning from its surroundings. [Gabriel 1996]
Those interests which pertain to the system's development, its operation or any other aspects that are critical or otherwise important to one or more stakeholders. Concerns can be logical or physical concepts, but they may also include system considerations such as performance, reliability, security, distribution, and evolvability. [IEEE Std 1471-2000]
(From aosd.net) A concern is an area of interest or focus in a system. Concerns are the primary criteria for decomposing software into smaller, more manageable and comprehensible parts that have meaning to a software engineer (see separation of concerns). Examples of concerns include requirements, use cases, features, data structures, quality-of-service issues, variants, intellectual property boundaries, collaborations, patterns and contracts. There are many formulations used to capture concerns as well-identified separate units, aspects are one such mechanism, that are tuned to capturing crosscutting concerns.
A range of values that, considering all possible samples, has some designated probability of including the true population value [ISERN]
One behavioural description conforms to another if (and only if) any object that behaves as described by one also behaves as described by the other (given a mapping between the two descriptions). A conformance is a relationship between the two descriptions, accompanied by a justification that includes the mapping between them and the rationale for the choices made. Refinement and conformance form the basis of traceability and document the answer to the "why" question: Why is this design done in this way? [D'Souza&Wills 1999]
Mary Shaw and David Garlan state that connectors are "the loci of relations among components. They mediate interactions but are not things to be hooked up (rather, they do the hooking up). Each connector has a protocol specification that defines its properties, which include rules about the types of interfaces it is able to mediate for, assurances about properties of the interaction, rules about the order in which things happen, and commitments about the interaction (...).''
The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component. [IEEE Std 610.12-1990]
Consistency is the absence of inconsistencies in or between software artefacts.
copy-and-modify reuse (or copy-and-edit reuse)
A widespread (but also very vicious) form of reuse where the reuser takes a copy of a component and starts modifying it without maintaining any form of consistency with the original component. In other words, there is no systematic way of keeping the two versions synchronised. The danger is of course a proliferation of versions, and an inability to upgrade to new versions of a component. This form of reuse only brings benefits on a very short-term basis.
Reactive modification of a software product performed after delivery to correct discovered faults [IEEE Std 1219-1998]
The ability of software products to perform their exact tasks, as defined by their specification [Meyer 1997]
Concerns that do not fit within the dominant decomposition of a given software system, and as such have an implementation that cuts across that decomposition. Aspect-oriented programming is intended to be a solution to modularise such crosscutting concerns.
Decay is the antithesis of evolution. While the evolution process involves progressive changes, the changes are degenerative in the case of decay.
Involves examining dependency relationships among software artefacts of the same kind, usually program entities (i.e. dependencies at the implementation level). This definition is more or less the same as the one of vertical traceability.
A graph in which the nodes represent software artefacts, and the edges represent all different kinds of dependency relationships between these artefacts
The phase in the software life-cycle that emphasises a logical solution, i.e. how the system fulfills the requirements. During object-oriented design, there is an emphasis on defining logical software objects that will ultimately be implemented in an object-oriented programming language. In this view, the design serves as a high level description of the source code, describing its key features and giving a blueprint of how the code is organised.
(1) The process of defining the architecture, components, interfaces, and other characteristics of a system or component. (2) The result of the process in (1). [IEEE Std 610.12-1990]
A subset of reverse engineering in which domain knowledge, external information, and deduction or fuzzy reasoning are added to the observations of the subject system to identify meaningful higher-level abstractions beyond those obtained directly by examining the system itself. [Chikofsky&Cross 1990]
Design recovery recreates design abstractions from a combination of code, existing design documentation (if available), personal experience, and general knowledge about problem and application domains. Design recovery must reproduce all of the information required for a person to fully understand what a program does, how it does it, why it does it, and so forth. Thus, it deals with a far wider range of information than found in conventional software engineering representations or code. [Biggerstaff 1989]
(1) The process of refining and expanding the preliminary design of a system or component to the extent that the design is sufficiently complete to be implemented. (2) The result of the process in (1). [IEEE Std 610.12-1990]
A form of reuse where a degree of consistency is maintained between the original component and its reusers. With disciplined reuse the benefits of reuse are much more sustainable.
A problem area. Typically, many application programs exist to solve the problems in a single domain. The following prerequisites indicate the presence of a domain: the existence of comprehensive relationships among objects in the domain, a community interested in solutions to the problems in the domain, a recognition that software solutions are appropriate to the problems in the domain, and a store of knowledge or collected wisdom to address the problems in the domain. Once recognized, a domain can be characterized by its vocabulary, common assumptions, architectural approach, and literature. [Arango & Prieto-Diaz 1991]
An area of knowledge or activity characterized by a set of concepts and terminology understood by practitioners in that area [Booch et al 1990]
The process of identifying, capturing and organizing domain knowledge about the problem domain with the purpose of making it reusable when creating new systems. [Arango 1994]
The part of domain engineering that deals with identifying commonalities, similarities and variabilities of an application or an application domain [Jacobson et al. 1997].
A systematic way of defining, implementing and evolving a domain in terms of commonalities and variabilities
Domain scoping identifies the domains of interest, the stakeholders, and their goals, and defines the scope of the domain. [Arango 1994]
Domain modeling is the activity of representing the domain, or building the domain model. Typically a domain model is formed through a commonality and variability analysis of concepts in the domain. [Arango 1994]
The dominant decomposition is the principal decomposition of a program into separate modules. The tyranny of the dominant decomposition [TarrEtAl1999] refers to restrictions imposed by the dominant decomposition on a software engineer's ability to represent particular concerns in a modular way. Many kinds of concerns do not align with the chosen decomposition, so that the concerns end up scattered across many modules and tangled with one another.
A software duplicate is a code fragment that is redundant to another code fragment, often due to copy and paste. A negative consequence of duplication is that if one fragment is changed, each duplicate may need to be adjusted too. Note that the term software duplicate is preferred over software clone. In English, clone suggests that one fragment is derived/copied from the other one. However, this is just one special type of software redundancy: code fragments could also be similar by accident.
One of the three types of software described by Lehman in his SPE program classification [LehmanBelady1985]. The distinctive properties of E-type systems are: the problem that they address cannot be formally and completely specified; the program has an imperfect model of the operational domain embedded in it; the program reflects an unbounded number of assumptions about the real world; the installation of the program changes the operational domain; the process of developing and evolving E-type systems is driven by feedback.
ease of use
The ease with which people of various backgrounds and qualifications can learn to use software products and apply them to solve problems. It also covers the ease of installation, operation and monitoring.
Economy, seen as a software quality, is the ability of a system to be completed on or below its assigned budget [Meyer 1997]
The ability of a software system to place as few demands as possible on hardware resources, such as processor time, space occupied in internal and external memories, bandwidth used in communication devices [Meyer 1997]
The degree to which a system or component performs its designated functions with minimum consumption of resources. In case of time resources we speak of execution efficiency. In case of available storage resources we speak of storage efficiency [IEEE Std 610.12-1990]
The process of estimating in advance the effort (or time) required to make a certain software change
Unscheduled corrective maintenance performed to keep a system operational [IEEE Std 1219-1998]
The profession in which a knowledge of the mathematical and natural sciences gained by study, experience and practice is applied with judgement to develop ways to utilize, economically, the materials and forces of nature for the benefit of mankind [Accreditation Board for Engineering and Technology, 1996]
See also software engineering
See software entropy
See software evolution
evolutionary software development
This is basically the same as iterative incremental software development, but the term stresses the fact that a software system is never completely finished, and that it continues to evolve after it has been delivered.
The capability of software products to be evolved to continue to serve their customers in a cost-effective way [Cook&al2000]
In general, an experiment is defined as an act or operation for the purpose of discovering something unknown or testing a principle, supposition, etc. In software engineering: a trial that is conducted in order to verify a hypothesis defined beforehand in a controlled setting in which the most critical factors can be controlled or monitored [ISERN]
A formal experiment is a rigorous, controlled investigation of an activity, where key factors are identified and manipulated to document their effects on the outcome. By their nature, since formal experiments require a great deal of control, they tend to be small, involving small numbers of people or events. We can think of experiments as "research in the small". Formal experiments, case studies and surveys are three key components of empirical investigation in software engineering. [Fenton&Pfleeger 1996]
Measure that includes all uncontrolled sources of variation affecting a particular score [ISERN]
The ease with which a system or component can be modified to increase its storage or functional capacity [IEEE Std 610.12-1990]
The ease of adapting software products to changes of specification [Meyer1997].
Information considered to be objectively real because it was obtained through observation [ISERN]
A feature describes prominent or distinctive user-visible aspects, qualities or characteristics of a software system or systems [Kang et al. 1990]
A feature is an observable and relatively closed behaviour or characteristic of a (software) part [Pulvermuller&al2001].
The ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed [IEEE Std 610.12-1990]
A formal method of software development is a process for developing software that exploits the power of mathematical notation and mathematical proofs [Wordsworth 1999]
Forward engineering is the traditional process of moving from high-level abstractions and logical, implementation-independent designs to the physical implementation of a system. Forward engineering follows a sequence of going from requirements through designing its implementation. [Chikofsky&Cross 1990]
fragile base class problem
Refers to the problem that occurs when independently developed subclasses are broken when their base class evolves.
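A hedged, self-contained illustration in Python (all class names invented): `CountingSet` overrides both `add()` and `add_all()` of its base class. In "version 2" of the base class, `add_all()` is reimplemented in terms of `add()`, so the unchanged subclass now counts every element twice.

```python
class SetV1:
    """Base class, version 1: add_all() is independent of add()."""
    def __init__(self):
        self.items = []
    def add(self, x):
        self.items.append(x)
    def add_all(self, xs):
        self.items.extend(xs)

class SetV2(SetV1):
    """Base class after 'evolution': add_all() now delegates to add()."""
    def add_all(self, xs):
        for x in xs:
            self.add(x)

def make_counting(base):
    """Build the same independently developed subclass on either base."""
    class CountingSet(base):
        count = 0
        def add(self, x):
            self.count += 1
            super().add(x)
        def add_all(self, xs):
            self.count += len(xs)
            super().add_all(xs)
    return CountingSet()

s1 = make_counting(SetV1); s1.add_all([1, 2, 3])
s2 = make_counting(SetV2); s2.add_all([1, 2, 3])
print(s1.count, s2.count)  # → 3 6: the subclass is broken by base-class evolution
```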
A framework is a reusable design of all or part of a software system described by a set of abstract classes and the way instances of those classes collaborate [Roberts&Johnson 1996].
A framework is a micro-architecture that provides an extensible template for applications within a specific domain [OMG 1997].
A framework is anything that can be adapted or extended via systematic extension or configuration [Stahl&Volter 2006].
(1) The process of defining the working relationships among the components of a system. (2) The result of the process in (1). [IEEE Std 610.12-1990]
The extent of possibilities provided by a system [Meyer 1997]
The degree to which a system or component performs a broad range of functions [IEEE Std 610.12-1990]
Software can be considered 'general' if it can be used, without change, in a variety of situations. [Parnas 1979]
The characteristic of source code that enables programmers, coders, bug-fixers and people coming to the code later in its life to understand its construction and intentions, and to change it comfortably and confidently. [Gabriel 1996]
See also: maintainability
(1) Involving or serving as an aid to learning, discovery or problem solving by experimental and especially trial-and-error methods
(2) Of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of methods) to improve performance (e.g., a heuristic computer program)
Horizontal reuse provides generic reusable components that can support a variety of products. In other words, the components can be reused in different domains or product families.
Expresses relationships between software artefacts that reside in different phases of the software life-cycle, e.g. dependencies between a requirements specification and a design component
See also: vertical traceability
A tentative explanation that accounts for a set of facts and can be tested by further investigation; a theory [ISERN]
The hypothesis is the tentative theory or supposition that you think explains the behaviour you want to explore. Wherever possible, you should try to state your hypothesis in quantifiable terms, so that it is easy to tell whether the hypothesis is confirmed or refuted [Fenton&Pfleeger 1996].
See also: research hypothesis, statistical hypothesis.
Impact analysis tries to assess the impact of changes on the rest of the system: when a certain component changes, which system parts will be affected, and how will they be affected?
Change impact analysis is defined as "identifying the potential consequences of a change, or estimating what needs to be modified to accomplish a change". [Bohner&Arnold1996]
See also: change propagation
The phase in the software life-cycle where the actual software is implemented. The result of this phase consists of source code, together with documentation to make the code more readable.
A state in which two or more overlapping elements of different software models make assertions about aspects of the system they describe which are not jointly satisfiable [Spanoudakis&Zisman 2001].
The process by which inconsistencies between software models are handled so as to support the goals of the stakeholders concerned [Finkelstein&al 1996].
The ability of software systems to protect their various components (programs, data) against unauthorized access and modification [Meyer 1997]
Intercession is the ability of a program to modify its own execution state or to alter its own interpretation or meaning [Maes 1987].
See also reflection
Introspection is the ability of a program to observe and therefore reason about its own state [Maes 1987].
See also reflection
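Python's built-in reflection facilities give a compact illustration of introspection: the program below (a made-up `Point` class) observes its own type, state and methods at run time without modifying anything, in contrast to intercession.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
print(type(p).__name__)      # observe the object's class → Point
print(sorted(vars(p)))       # observe the object's own state → ['x', 'y']
print(hasattr(p, "norm"))    # ask whether a method exists → True
print(getattr(p, "norm")())  # look a method up by name and invoke it → 5.0
```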
The time from application invocation to when execution of the program actually begins [KrintzEtAl1998].
iterative incremental software development
The process of developing a software system in small steps (increments) by iterating a number of times over the different software phases.
A statement that predicts behavior under certain defined conditions, that is based on facts, reason, and observation, and that is accepted as true. There are no established laws in software engineering [ISERN]
level of significance
Probability of rejecting the null hypothesis when it is true [ISERN]
The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment [IEEE Std 610.12-1990]
See software maintenance
A measure provides a quantitative indication of the extent, amount, dimensions, capacity or size of some attribute of a product or process. [IEEE Std 729-1993]
The process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules [Fenton&Pfleeger 1996]
The experimental process in which, to precisely describe the entities or events in the real world, numbers or other symbols are assigned to their attributes by using a given scale. The result of the measurement is called a measure. [Abreu&al 2000]
The act of determining a measure. [IEEE Std 729-1993]
A quantitative measure of the degree to which a system, component or process possesses a given attribute. [IEEE Std 729-1993]
See also software metric
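As a sketch of such a quantitative measure, the function below computes a rough approximation of McCabe's cyclomatic complexity: 1 plus the number of branch points found in a piece of Python source, counted over its abstract syntax tree. The node set chosen here is an assumption for illustration, not a faithful implementation of the metric.

```python
import ast

# Node types treated as branch points in this approximation.
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def cyclomatic_complexity(source):
    """1 + number of branch-point nodes in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(code))  # 1 + if + for + if → 4
```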
A mixin is a subclass definition that may be applied to different superclasses to create a related family of modified classes [Bracha&Cook 1990, page 303]
A mixin is a free-standing class extension function that abstracts over its own superclass [Simons 1995]
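The "abstracts over its own superclass" idea can be sketched in Python, where a mixin class assumes only that whatever class it is mixed into provides some hook; here the invented `ComparableMixin` assumes a `_key()` method and derives comparisons from it.

```python
class ComparableMixin:
    """Derives comparisons from a _key() method the mixing class must supply."""
    def __lt__(self, other):
        return self._key() < other._key()
    def __ge__(self, other):
        return not self < other

class Version(ComparableMixin):
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def _key(self):
        return (self.major, self.minor)

print(Version(1, 4) < Version(2, 0))   # → True
print(Version(2, 1) >= Version(2, 0))  # → True
```

The same mixin could be applied to any other class defining `_key()`, yielding the "related family of modified classes" from the Bracha & Cook definition.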
A model is a simplified representation of a system or phenomenon with any hypotheses required to describe the system or explain the phenomenon, often mathematically. It is an abstraction of reality emphasizing those aspects that are of interest to someone [ISERN]
A model is a coherent set of formal elements describing something (e.g., a system, bank, phone or a train) built for some purpose that is amenable to a particular form of analysis, such as: communication of ideas between people and machines, completeness checking, race condition analysis, test case generation, viability in terms of indicators such as cost and estimation, standards, transformation into an implementation. [MCF2003]
Modeling, in the broadest sense, is the cost-effective use of something in place of something else for some cognitive purpose. It allows us to use something that is simpler, safer, or cheaper than reality instead of reality for some purpose.
A model represents reality for the given purpose; the model is an abstraction of reality in the sense that it cannot represent all aspects of reality. This allows us to deal with the world in a simplified manner, avoiding the complexity, danger and irreversibility of reality.
model-driven architecture (MDA)
model-driven development (MDD)
See model-driven engineering
model-driven engineering (MDE)
Model-driven engineering is a software engineering approach that promotes the use of models and transformations as primary artifacts throughout the software development process. Its goal is to tackle the problem of developing, maintaining and evolving complex software systems by raising the level of abstraction from source code to models. As such, model-driven engineering promises reuse at the domain level, increasing the overall software quality.
Model refactoring is the equivalent of program refactoring, but applied to models instead of programs (i.e. source code). It is a specific kind of model transformation that intends to improve the structure of a model while preserving its behaviour.
The opposite of copy-and-modify reuse. Instead of copying a component and making changes to the copy, the original component is modified directly. Since this has an impact on all the other components that make use of it, we might need to make modifications to these components as well. This often leads to a propagation of changes throughout the entire software system.
null hypothesis (H0)
A statement concerning one or more parameters that is subjected to statistical test [ISERN].
The hypothesis that there is no significant difference between two treatments (that is, between two methods, tools, techniques, environments, or other conditions whose effects you are measuring, such as productivity, quality, or cost). The null hypothesis is assumed to be true unless the data indicates otherwise [Fenton&Pfleeger 1996].
See also: alternative hypothesis, statistical hypothesis.
A software development technique in which a system or component is expressed in terms of objects and connections between those objects [IEEE Std 610.12-1990]
A programming language that allows the user to express a program in terms of objects and messages between those objects [IEEE Std 610.12-1990]
A discrete instance of the phenomena being studied, e.g. a specific software module, a specific code review, an individual programmer [ISERN]
A point of view in which some principles, approaches, concepts, and even theories, have been stated uniformly. A set of assumptions about reality that, when applied to a particular situation, can be used as a guide for action. For example, the Quality Improvement Paradigm. [ISERN]
A philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated. [Merriam-Webster's dictionary, 2002]
A standard (object-oriented) design for addressing frequently occurring problems, described in a standard way [Gamma et al. 1994]
Modification of a software product after delivery to improve performance or maintainability [IEEE Std 1219-1998]
See also: software maintenance
The process of design and implementation in which software is embellished, modified, reduced, enlarged, and improved through a process of repair rather than replacement. [Gabriel 1996]
A set of subsystems and technologies that provide a coherent set of functionality through interfaces and specified usage patterns, which any application supported by that platform can use without concern for the details of how the functionality provided by the platform is implemented [Kleppe et al. 2003]
Provides a set of technical concepts, representing the different kinds of parts that make up a platform and the services provided by that platform. [Kleppe et al. 2003]
All observations of the phenomena being studied, e.g. all software modules, all code reviews, all programmers [ISERN]
The ease of transferring software products to various hardware and software environments [Meyer 1997]
Portability is the property of a system which permits it to be mapped from one environment to a different environment[NATO 1970]
power of test
Probability of rejecting the null hypothesis when the alternative hypothesis is true [ISERN]
(1) The process of analyzing design alternatives and defining the architecture, components, interfaces, and timing and sizing estimates for a system or component. (2) The result of the process in (1). [IEEE Std 610.12-1990]
Maintenance performed for the purpose of preventing problems before they occur.
See also: software maintenance
See product line
A collection of existing and potential products that address a coherent business area and share a set of similar characteristics. All these products are made by the same process and for the same purpose, and differ only in style, model or size.
See also: product line
product-line based reuse
A kind of reuse that exploits the commonalities in a product line (or product family) and establishes the bounds of variability among the products, making it possible to develop common assets and streamline the development process.
See refinement [D'Souza&Wills 1999]
Redocumentation is the creation or revision of a semantically equivalent representation within the same relative abstraction level. The resulting forms of representation are usually considered alternative views (for example, dataflow, data structure, and control flow) intended for human audience. Redocumentation is the simplest and oldest form of reverse engineering, and many consider it to be an unintrusive, weak form of restructuring. The "re-" prefix implies that the intent is to recover documentation about the subject system that existed or should have existed. [Chikofsky&Cross 1990]
Reengineering, also known as both renovation and reclamation, is the examination and alteration of a subject system to reconstitute it in a new form and the subsequent implementation of the new form. Reengineering generally includes some form of reverse engineering (to achieve a more abstract description) followed by some form of forward engineering or restructuring. This may include modifications with respect to new requirements not met by the original system. [Chikofsky&Cross 1990]
A system-changing activity that results in creating a new system that either retains or does not retain the individuality of the initial system. [IEEE 1998]
[noun] A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behaviour.
[verb] To restructure software by applying a series of refactorings without changing its observable behaviour. [Fowler 1999]
See also: restructuring
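A minimal, hypothetical Python illustration of the [noun] sense: an "Extract Function" refactoring that improves the internal structure while leaving the observable behaviour unchanged. All names and the VAT rate are invented for the example.

```python
# Before: the pricing rule is inlined and would be duplicated elsewhere.
def invoice_total_before(items):
    total = 0.0
    for qty, unit_price in items:
        total += qty * unit_price * 1.21  # intent of 1.21 is implicit
    return total


# After "Extract Function": same observable behaviour, clearer structure.
VAT_RATE = 1.21

def line_total(qty, unit_price):
    """The extracted, named piece of logic."""
    return qty * unit_price * VAT_RATE

def invoice_total(items):
    return sum(line_total(q, p) for q, p in items)


# Behaviour preservation: both versions agree on the same input.
items = [(2, 10.0), (1, 5.0)]
assert invoice_total(items) == invoice_total_before(items)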
A refinement is a detailed description that conforms to another (its abstraction). Everything said about the abstraction holds, perhaps in a somewhat different form, in the refinement [D'Souza&Wills 1999]
Reflection is the ability of a program to manipulate as data, something representing the state of the program during its own execution. There are two aspects of such manipulation: introspection and intercession. Both aspects require a mechanism for encoding execution state as data; providing such an encoding is called reification [Maes 1987].
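Both aspects of the [Maes 1987] definition can be sketched in a few lines of Python, whose built-in facilities (`dir`, `getattr`, `setattr`) reify program structure as data; the `Greeter` class is a made-up example.

```python
class Greeter:
    def greet(self, name):
        return f"Hello, {name}"


g = Greeter()

# Introspection: the program inspects its own structure as data.
assert "greet" in dir(g)
method = getattr(g, "greet")
assert method("world") == "Hello, world"

# Intercession: the program modifies its own behaviour at run time.
setattr(Greeter, "greet", lambda self, name: f"Hi, {name}")
assert g.greet("world") == "Hi, world"
```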
Software reliability is the probability of a failure-free operation of a computer program in a specified environment for a specified time [Musa et al. 1987].
Repairability is the ability to facilitate the repair of defects [Meyer 1997].
The collection of two or more observations under a set of identical experimental conditions [ISERN]
Repetition of the basic experiment under identical conditions, rather than repeating measurements on the same experimental unit [Fenton&Pfleeger 1996].
a statement about what the proposed system will do that all stakeholders agree must be made true in order for the customer's problem to be adequately solved [Lethbridge&Laganiere 2001]
The phase in the software life-cycle in which it is defined what the system should do, i.e., what the (functional and non-functional) requirements are.
A software requirements specification is traceable if (1) the origin of each of its requirements is clear and if (2) it facilitates the referencing of each requirement in future development or enhancement documentation [IEEE 1993]
A tentative theory or supposition provisionally adopted to account for certain facts and to guide in the investigation of others [ISERN]
Restructuring is the transformation from one representation form to another at the same relative abstraction level, while preserving the subject system's external behaviour (functionality and semantics). A restructuring transformation is often one of appearance, such as altering code to improve its structure in the traditional sense of structured design. While restructuring creates new versions that implement or propose change to the subject system, it does not normally involve modifications because of new requirements. However, it may lead to better observations of the subject system that suggest changes that would improve aspects of the system. Restructuring is often used as a form of preventive maintenance to improve the physical state of the subject system with respect to some preferred standard. It may also involve adjusting the subject system to meet new environmental constraints that do not involve reassessment at higher abstraction levels. [Chikofsky&Cross 1990]
The ability of software elements to serve for the construction of many different applications [Meyer 1997]
The degree to which a software module or other work product can be used in more than one computer program or software system [IEEE Std 610.12-1990]
A reusable asset is a tangible resource that is acquired or developed for the solution of multiple problems, such as specifications, designs, code, test cases, etc.
The process of adapting generalised components to various contexts of use [Bassett 1997]
Repeated use of an artifact, typically outside the original context in which the artifact was created [Jacobson et al. 1997]
The range of expected results in reuse effectiveness, proficiency, and efficiency that an organisation is able to achieve through its process.
The process of developing a set of specifications for a complex hardware system by an orderly examination of specimens of that system.[Rekoff 1985]
The process of analysing an existing system to identify its components and their interrelationships and create representations of the system in another form or at a higher level of abstraction. Reverse engineering is usually undertaken in order to redesign the system for better maintainability or to produce a copy of a system without access to the design from which it was originally produced. For example, one might take the executable code of a computer program, run it to study how it behaved with different input and then attempt to write a program oneself which behaved identically (or better). An integrated circuit might also be reverse engineered by an unscrupulous company wishing to make unlicensed copies of a popular chip.
The process of extracting software system information (including documentation) from source code [IEEE Std 1219-1998]
Reverse engineering is the process of analyzing a subject system to: identify the system's components and their interrelationships and; create representations of the system in another form or at a higher level of abstraction. Reverse engineering generally involves extracting design artifacts and building or synthesizing abstractions that are less implementation-dependent. Reverse engineering in and of itself does not involve changing the subject system or creating a new system based on the reverse-engineered subject system. It is a process of examination, not a process of change or replication. [Chikofsky&Cross 1990]
The phenomenon where a change in one piece of a software system affects at least one other area of the same software system (either directly or indirectly)
See also: change propagation
The ability of software systems to react appropriately to abnormal conditions [Meyer 1997]
The seamless integration between design and source code, between modeling and implementation. With round-trip engineering, a programmer generates code from a design, changes that code in a separate development environment, and recreates the adapted design diagram back from the source code [Demeyer et al. 1999]
An iteration between modelling, generating code, changing that code and mapping this code back to the original model [Demeyer et al. 2000]
A subset of a population [ISERN]
separation of concerns (SOC)
Separation of concerns is closely related to the well-known Roman principle of "divide and conquer". It simply means that a large problem is easier to manage if it can be broken down into pieces; particularly so if the solutions to the sub-problems can be combined to form a solution to the large problem. Separation of concerns can be supported in many ways: by process, by notation, by organization, by language mechanism and others.
Services, as the first-class citizens of SOAs, are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols for the purpose of building networks of collaborating applications within and across organisational boundaries.
service-oriented architecture (SOA)
According to Thomas Erl [Erl2005], SOA is "a model in which automation logic is decomposed into smaller, distinct units of logic. Collectively, these units comprise a larger piece of business automation logic. Individually, these units can be distributed. (...) (SOA) encourages individual units of logic to exist autonomously yet not isolated from each other. Units of logic are still required to conform to a set of principles that allow them to evolve independently, while still maintaining a sufficient amount of commonality and standardization. Within SOA, these units of logic are known as services."
Some of the key principles of service-orientation are: loose coupling, service contract, autonomy, abstraction, reusability, composability, statelessness and discoverability.
The degree to which a system or component has a design and implementation that is straightforward and easy to understand [IEEE Std 610.12-1990]
Software is part of a system solution that can be encoded to execute on a computer as a set of instructions; it includes all the associated documentation necessary to understand, transform and use that solution [ISERN]
Software is the collection of computer programs, procedures, rules, and associated documentation and data [IEEE]
Software is the non-hardware part, including associated documentation, of a system being implemented or implemented in part with a computer or an embedded processor [Rosen 1992]
Relates to the problem that the quality of software decreases, and the software entropy increases, as the software evolves over time.
See software evolution
software configuration management (SCM)
The discipline of managing and controlling change in the evolution of software systems. [IEEE Std 1042-1987]
The establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines. [NATO1969]
The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. [IEEE Std 610.12-1990]
The systematic activities involved in the design, implementation and testing of software to optimize its production and support. [Canadian Standards Association]
The amount of disorder in a software system
According to Lehman and Ramil (chapter 1 of [MadhavjiEtAl2006]), the term evolution reflects "a process of progressive, for example beneficial, change in the attributes of the evolving entity or that of one or more of its constituent elements. What is accepted as progressive must be determined in each context.
It is also appropriate to apply the term evolution when long-term change trends are beneficial even though isolated or short sequences of changes may appear degenerative. For example, an entity or collection of entities may be said to be evolving if their value or fitness is increasing over time. Individually or collectively they are becoming more meaningful, more complete or more adapted to a changing environment."
The application of software maintenance activities and processes that generate a new operational software version with a changed customer-experienced functionality or properties from a prior operational version together with the associated quality assurance activities and processes, and with the management of the activities and processes [Chapin et al. 2001]
The phases a software product goes through between when it is conceived and when it is no longer available.
The software life-cycle typically includes the following: requirements analysis, design, construction, testing (validation), installation, operation, maintenance, and retirement.
The development process tends to run iteratively through these phases rather than linearly; several models (spiral, waterfall, etc.) have been proposed to describe this process.
Other processes associated with a software product are: quality assurance, marketing, sales and support.
See also: software maintenance
The process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment. [IEEE Std 610.12-1990]
Modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment [IEEE Std 1219-1998]
This definition has been extended recently in the 2006 ISO/IEC 14764 standard, a revision of the IEEE 1219 standard of 1998: Software maintenance is the totality of activities required to provide cost-effective support to a software system. Activities are performed during the pre-delivery stage as well as the post-delivery stage. [ISO/IEC 2006]
The software product undergoes modification to code and associated documentation due to a problem or the need for improvement. The objective is to modify the existing software while preserving its integrity [ISO Std 12207-1995]
The deliberate application of activities and processes, whether or not completed, to existing software that modify either the way the software directs the hardware of the system, or the way the system (of which the software is a part) contributes to the business of the system's stakeholders, together with the associated quality assurance activities and processes, and with the management of the activities and processes, and often done in the context of software evolution [Chapin et al. 2001]
See also: software evolution
A software metric is a combination of measures of attributes belonging to a software product, or to its development process, that shows quantitatively some of its characteristics. [Abreu&al 2000]
software product line
A set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment (i.e., domain) or mission and that are developed from a common set of core assets in a prescribed way (see Software Product Lines Glossary).
See also: product line
software quality assurance
A planned and systematic pattern of all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements. [IEEE Std 610.12-1990]
specialisation interface of a class
The specialization interface is used to extend and modify classes. It is accessed by making a subclass and adding messages and methods, refining types, or overriding methods. The last of these, overriding methods, is the operation that can modify behaviour and that can interact with the internal structure of the class. [Lamping 1993]
See also: client interface
A descriptive measure of a sample, e.g., mean [ISERN]
A statement about one or more parameters of a population. The null hypothesis and alternative hypothesis are two forms of a statistical hypothesis [ISERN]
A mathematical statement concerning the sampling distribution of a random variable that is used in evaluating the outcome of an experiment or in predicting the outcome of future replications of an experiment [ISERN].
A statistic whose purpose is to provide a test of some statistical hypothesis. Test statistics such as t and F have known sampling distributions that can be employed in determining the probability of an obtained result under the null hypothesis [ISERN].
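As a hypothetical illustration, the two-sample t statistic mentioned above can be computed with the Python standard library; the sample data (e.g. measurements under two treatments) are invented.

```python
import statistics

def t_statistic(a, b):
    """Two-sample t statistic (pooled variance) for H0: 'no difference
    between two treatments', e.g. an outcome measured under two tools."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (
        (pooled_var * (1 / na + 1 / nb)) ** 0.5)


treatment_a = [12, 15, 11, 14]   # invented sample under treatment A
treatment_b = [12, 15, 11, 14]   # invented sample under treatment B
assert t_statistic(treatment_a, treatment_b) == 0.0  # no observed difference
```

The value of the statistic would then be compared against the known sampling distribution of t to decide whether to reject the null hypothesis.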
(1) Any disciplined approach to software
design that adheres to specified rules based on principles such
as modularity, top-down design, and stepwise refinement of data,
system structures, and processing steps.
(2) The result of applying the approach in (1). [IEEE Std 610.12-1990]
A survey is a retrospective study of a situation to try to document relationships and outcomes. A survey is always done after an event has occurred. When performing a survey, you have no control over the situation at hand. That is, because it is a retrospective study, you can record a situation and compare it with similar ones. But you cannot manipulate variables as you do with case studies and experiments. Surveys try to poll what is happening broadly over large groups of projects: "research in the large" [Fenton&Pfleeger 1996].
Reuse is not restricted to certain phases of the software life-cycle, nor even to single applications. The idea of systematic reuse is that reuse takes place continuously during the entire software development process. Consequently, systematic software reuse corresponds to the purposeful creation, management and application of reusable assets.
The ISO/IEC standard 9126 defines testability as "attributes of software that bear on the effort needed to validate the software product".
The phase in the software life-cycle that aims at uncovering defects by executing specialised test programs and test cases.
The activity of uncovering defects in an implementation by comparing its behaviour against that of its specification under a given set of runtime stimuli (the test cases or test data). [D'Souza&Wills 1999]
The ability of a software system to be released when or before its users want it [Meyer 1997]
The degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another; for example, the degree to which the requirements and design of a given software component match. [IEEE Std 610.12-1990]
The degree to which each element in a software development product establishes its reason for existing; for example, the degree to which each element in a bubble chart references the requirement that it satisfies [IEEE Std 610.12-1990]
See also: requirements traceability
The same as dependency analysis, but it usually involves examining dependency relationships between software artefacts at different phases of the software life-cycle, e.g. a dependency between a requirements specification and a corresponding design component.
The automatic generation of a target model from a source model, according to a transformation definition. [Kleppe et al. 2003]
transformational software engineering
A view of software engineering through which the production and evolution of software can be modelled, and practically carried out, as a chain of transformations that preserves some essential properties of the source specifications. Program compilation and the transformation of tail recursion into an iterative pattern are popular examples. This approach is currently applied to software evolution, reverse engineering and migration. The transformational paradigm is one of the most powerful approaches to formally guarantee traceability.
a set of transformation rules that together describe how a model in the source language can be transformed into a model in the target language. [Kleppe et al. 2003]
a description of how one or more constructs in the source language can be transformed into one or more constructs in the target language. [Kleppe et al. 2003]
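A toy Python sketch of one such transformation rule (not taken from the cited source): each "Class" construct in a UML-like source model is mapped to a "Table" construct in a relational target model. The model encoding as dictionaries is invented for the example.

```python
def class_to_table(cls):
    """One transformation rule: source construct 'Class' -> target 'Table'."""
    return {"table": cls["name"].lower(),
            "columns": ["id"] + [a["name"] for a in cls["attributes"]]}

def transform(source_model):
    """A transformation definition: apply each rule over the source model."""
    return [class_to_table(c) for c in source_model["classes"]]


source = {"classes": [{"name": "Customer",
                       "attributes": [{"name": "email"}]}]}
assert transform(source) == [{"table": "customer",
                              "columns": ["id", "email"]}]
```

Real MDA tool chains express such rules in dedicated transformation languages rather than general-purpose code, but the structure (rules applied over source constructs to produce target constructs) is the same.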
A fixed value (typically an upper bound or lower bound) that distinguishes normal values from abnormal metric values. Typically used when applying software metrics to detect anomalies.
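For illustration, a minimal Python sketch of threshold-based anomaly detection over metric values; the metric names and the bounds are hypothetical, not prescribed values.

```python
# Hypothetical upper-bound thresholds for two common code metrics.
THRESHOLDS = {"cyclomatic_complexity": 10, "method_length": 50}

def anomalies(metrics):
    """Return the names of metrics whose values exceed their threshold."""
    return [name for name, value in metrics.items()
            if value > THRESHOLDS.get(name, float("inf"))]


module = {"cyclomatic_complexity": 14, "method_length": 30}
assert anomalies(module) == ["cyclomatic_complexity"]
```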
Usability of a software product is the extent to which the product is convenient and practical to use [Boehm et al 1978].
Software variability refers to the ability of a software system or artefact to be efficiently extended, changed, customised or configured for use in a particular context [Svahnberg et al 2005].
Product line variability describes the variation (differences) between the systems that belong to a product line in terms of properties and qualities (like features that are provided or requirements that are fulfilled). [Coplien et al. 1998, Kang et al. 2002, Pohl et al. 2005]
The ease of preparing acceptance procedures, especially test data, and procedures for detecting failures and tracing them to errors during the validation and operation phases [Meyer 1997]
A version is a snapshot of a certain software system at a certain point in time. Whenever a change is made to the software system, a new version is created.
The collection of all versions and their relationships.
A kind of database, file system or other kind of repository in which the version history of a software system is stored. The repository may be used to store source code, executable code, documentation or any other type of software artefact of which different versions may exist over time (or even at the same time).
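A minimal Python sketch of the repository idea described above; the class and its API are invented for illustration (real systems add branching, merging and metadata).

```python
class Repository:
    """Toy version repository: each commit of an artefact appends a new
    immutable snapshot; the full version history is retained."""

    def __init__(self):
        self.history = {}  # artefact name -> list of version snapshots

    def commit(self, name, content):
        self.history.setdefault(name, []).append(content)
        return len(self.history[name])  # the new version number (1-based)

    def checkout(self, name, version=-1):
        # version -1 means "latest"; otherwise versions are numbered from 1.
        return self.history[name][version if version == -1 else version - 1]


repo = Repository()
repo.commit("main.c", "v1 source")
assert repo.commit("main.c", "v2 source") == 2
assert repo.checkout("main.c") == "v2 source"
assert repo.checkout("main.c", version=1) == "v1 source"
```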
Vertical reuse is the process of developing components that can only be reused in a given product family.
Expresses relationships between software artefacts in the same phase of the software life-cycle. In this sense, it is the same as dependency analysis, but not necessarily restricted to the implementation phase.
See also: horizontal traceability
A view is a representation of a whole system from the perspective of a related set of concerns. [IEEE Std 1471-2000]
A viewpoint is a specification of the conventions for constructing and using a view. [IEEE Std 1471-2000]
A software component that encapsulates a system component (a procedure, a program, a file, an API) in order to transform its interface with its environment. For instance, a wrapper associated with a legacy program can give the latter an object-oriented interface.
In a database setting, a data wrapper is a software component that encapsulates a database or a set of files in order to change its model and the API through which the data can be manipulated. For example, a data wrapper built on top of a standard file can allow application programs to access the contents of the file as if it were a relational table or a collection of XML documents.
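As a hypothetical Python sketch, a wrapper that gives a legacy procedural component an object-oriented interface, in the spirit of the definition above; all names are invented.

```python
# Legacy "procedural" component: plain functions over a shared record store.
def legacy_open_account(accounts, number):
    accounts[number] = 0

def legacy_deposit(accounts, number, amount):
    accounts[number] += amount


class Account:
    """Wrapper: encapsulates the legacy API behind an OO interface."""
    _store = {}  # shared state the legacy functions operate on

    def __init__(self, number):
        self.number = number
        legacy_open_account(Account._store, number)

    def deposit(self, amount):
        legacy_deposit(Account._store, self.number, amount)

    @property
    def balance(self):
        return Account._store[self.number]


a = Account("BE-001")
a.deposit(100)
assert a.balance == 100
```

Clients now program against `Account` objects while the unmodified legacy functions keep doing the actual work, which is exactly the interface transformation the glossary entry describes.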
[Abreu&al 2000] Fernando Brito e Abreu: ???, 2000.
[Arango 1994] G. Arango. Domain Analysis Methods. In Software Reusability, pp. 17-49, Ellis-Horwood, New York, 1994.
[Bassett 1997] Paul G. Bassett: Framing Software Reuse: Lessons From the Real World. Yourdon Press Computing Series, ISBN 0-13-327859-X, Prentice Hall, 1997.
[Biggerstaff 1989] T. J. Biggerstaff: Design Recovery for Maintenance and Reuse. IEEE Computer, July 1989, pp. 36-49.
[Boehm et al 1978] B.W. Boehm and J. R. Brown and J.R. Kaspar et al. Characteristics of software quality. TRW Series of Software Technology, Amsterdam, North Holland, 1978.
[Bohner&Arnold1996] S.A. Bohner, R.S. Arnold. Software Change Impact Analysis. IEEE Computer Society, 1996
[Booch et al. 1999] G. Booch, J. Rumbaugh, I. Jacobson. The Unified Modeling Language User Guide. Addison-Wesley, 1999.
[Bracha&Cook 1990] Gilad Bracha, William Cook: Mixin-based inheritance. Proc. ECOOP/OOPSLA, 1990.
[Chapin et al. 2001] N. Chapin, J.E. Hale, K.Md. Khan, J.F. Ramil, W.-G. Tan. Types of software evolution and software maintenance. Journal of Software Maintenance and Evolution: Research and Practice, 13: 3-30, 2001.
[Chikofsky&Cross 1990] E.J. Chikofsky, J.H. Cross II. Reverse Engineering and Design Recovery: A Taxonomy. IEEE Software Engineering Journal, pp. 13-17, Jan. 1990.
[Cook et al. 2000] S. Cook, He Ji, Rachel Harrison. Software Evolution and Software Evolvability. Technical Report, University of Reading, 2000.
[Demeyer et al. 1999] S. Demeyer, S. Ducasse, S. Tichelaar. Why unified is not universal. UML shortcomings for coping with round-trip engineering. Proc. Int. Conf. UML, Springer-Verlag, 1999.
[Demeyer et al. 2000] S. Demeyer, S. Ducasse, O. Nierstrasz. Finding Refactorings via Change Metrics. Proc. Int. Conf. OOPSLA 2000, ACM Press, October 2000.
[D'Souza&Wills 1999] Desmond F. D'Souza, Alan Cameron Wills. Objects, Components and Frameworks with UML: The Catalysis Approach. ISBN 0-201-31012-0, Addison-Wesley, 1999.
[Edmonds 1997] B. Edmonds. Complexity and scientific modelling. Proc. 20th Int'l Wittgenstein Symposium, Austria, August 1997.
[Erl2005] Thomas Erl. Service-Oriented Architecture: Concepts, Technology, and Design, Prentice Hall, 2005
[Fenton&Pfleeger 1996] Norman E. Fenton, Shari Lawrence Pfleeger. Software Metrics: A Rigorous and Practical Approach. Thomson Computer Press, 1996.
[Finkelstein&al 1996] A. Finkelstein, G. Spanoudakis, D. Till. Managing interference. Joint Proceedings Sigsoft '96, pp. 172-174, ACM Press, 1996.
[Fowler 1999] Martin Fowler. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
[Gabriel 1996] Richard P. Gabriel. Patterns of Software: Tales from the Software Community. ISBN 0-19-512123-6, Oxford University Press, 1996.
[Gamma et al. 1994] Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
[IEEE Std 729-1993] IEEE Software Engineering Standard 729-1993: Glossary of Software Engineering Terminology. IEEE Computer Society Press, 1993.
[IEEE Std 610.12-1990] IEEE Standard Glossary of Software Engineering Terminology 610.12-1990. In IEEE Standards Software Engineering, 1999 Edition, Volume One: Customer and Terminology Standards. IEEE Press, 1999.
[IEEE Std 1219-1998] IEEE Standard for Software Maintenance, IEEE Std 1219-1998. In IEEE Standards Software Engineering, 1999 Edition, Volume Two: Process Standards. IEEE Press, 1999.
[IEEE Std 1471-2000] IEEE Standards Board. Recommended Practice for Architectural Description of Software-Intensive Systems, IEEE Std 1471-2000. September 2000.
[ISERN] International Software Engineering Research Network, http://www.iese.fhg.de/ISERN/
[ISO Std 9126] International Standards Organisation. ISO 9126 Information technology: Software product evaluation: Quality characteristics and guidelines for their use, Geneva, Switzerland, 1991
[ISO Std 12207-1995] International Standards Organisation. ISO 12207 Information Technology: Software Life Cycle Processes, Geneva, Switzerland, 1995.
[Jacobson et al. 1997] Ivar Jacobson, Martin Griss, Patrik Jonsson: Software Reuse: Architecture, Process and Organization for Business Success. Addison-Wesley, 1997.
[Kang et al. 1990] K. Kang, S. Cohen, J. Hess, W. Novak, S. Peterson. Feature-oriented domain analysis (FODA) feasibility study. Technical Report CMU/SEI-90-TR-021, 1990.
[Kleppe et al. 2003] A. Kleppe, J. Warmer, W. Bast. MDA Explained, The Model-Driven Architecture: Practice and Promise. Addison-Wesley, 2003.
[Koschke 1998] Rainer Koschke, Jean-Francois Girard. An Intermediate Representation for Reverse Engineering Analyses, Proc. WCRE, pp. 241-250, IEEE Computer Society, 1998
[Krintz et al. 1998] Chandra Krintz, Brad Calder, Han Bok Lee, Benjamin G. Zorn. Overlapping Execution with Transfer Using Non-Strict Execution for Mobile Programs. Proc. Int. Conf. Architectural Support for Programming Languages and Operating Systems, ACM Press, 1998.
[Lamping 1993] John Lamping. Typing the specialization interface. Proc. OOPSLA'93, ACM SIGPLAN Notices 28(10), pp. 201-214, October 1993, ACM Press.
[LehmanBelady1985] Meir M. Lehman, L. A. Belady. Program Evolution: Processes of Software Change. Academic Press, 1985.
[MadhavjiEtAl2006] Nazim H. Madhavji, Juan F. Ramil, Dewayne E. Perry. Software Evolution and Feedback: Theory and Practice. Wiley, 2006.
[Maes 1987] Pattie Maes. Computational Reflection. PhD Thesis, Artificial Intelligence Laboratory, Vrije Universiteit Brussel, 1987.
[Meyer 1997] Bertrand Meyer. Object-Oriented Software Construction, second edition. Prentice-Hall, 1997.
[Musa et al. 1987] J. D. Musa, A. Iannino, K. Okumoto. Engineering and Managing Software with Reliability Measures, McGraw-Hill, 1987.
[NATO 1969] P. Naur, B. Randell (Eds.). Software Engineering: A Report on a Conference Sponsored by the NATO Science Committee. NATO, 1969.
[NATO 1970] J. N. Buxton, B. Randell (Eds.). Software Engineering Techniques: A Report on a Conference Sponsored by the NATO Science Committee. NATO, 1970.
[OMG 1997Sem] Object Management Group: UML Semantics. OMG Document ad/97-08-04, Version 1.1, 1 September 1997.
[Parnas 1979] David Parnas. Designing software for ease of extension and contraction. IEEE Transactions on Software Engineering 5(2): 128-138, 1979.
[Pulvermuller&al2001] E. Pulvermuller, A. Speck, J. O. Coplien, M. D'Hondt, W. De Meuter. Feature Interaction in Composed Systems, 2001.
[Rekoff 1985] M. G. Rekoff Jr.: On Reverse Engineering. IEEE Transactions on Systems, Man, and Cybernetics, March-April 1985, pp. 244-252.
[Roberts&Johnson 1996] Don Roberts and Ralph Johnson: Evolving Frameworks: A Pattern Language for Developing Object-Oriented Frameworks. PLoP '96 Proceedings, 1996.
[Rosen 1992] S. Rosen: Encyclopedia of Computer Science. Van Nostrand Reinhold, 1992.
[ShawGarlan1996] Mary Shaw, David Garlan: Software Architecture: Perspectives on an Emerging Discipline. Prentice Hall, 1996.
[Spanoudakis&Zisman 2001] G. Spanoudakis, A. Zisman: Inconsistency management in software engineering: Survey and open research issues. In Handbook of Software Engineering and Knowledge Engineering, 1, pp. 329-380, World Scientific Publishing Co., 2001.
[Stahl&Volter 2006] T. Stahl, M. Volter: Model-Driven Software Development: Technology, Engineering, Management. Wiley, 2006.
[TarrEtAl1999] Peri Tarr, Harold Ossher, William Harrison and Stanley M. Sutton Jr.: N degrees of separation: multi-dimensional separation of concerns. Proc. ICSE 1999, pp. 107-119, IEEE Computer Society Press, 1999.
[Wordsworth 1999] J. B. Wordsworth: Getting the best from formal methods. Information and Software Technology 41(14), November 1999, pp. 1027-1032.