There are three basic styles of data model: the conceptual data model, the logical data model, and the physical data model. The conceptual data model, sometimes called the domain model, is typically used for exploring domain concepts in an enterprise with the project's stakeholders.
The logical data model is used for exploring the domain concepts and their relationships. It depicts the logical entity types (typically referred to simply as entity types), the data attributes describing those entities, and the relationships between the entities.
The physical data model is used to design the database's internal schema: it depicts the data tables, the data columns of those tables, and the relationships between the tables. This model represents the data design while taking into account the facilities and constraints of a given database management system. The physical data model is usually derived from the logical data model, although it can also be reverse-engineered from an existing database implementation.
A detailed physical data model contains all the artifacts the database requires to create relationships between tables or to achieve performance goals, such as indexes, constraint definitions, linking tables, partitioned tables, or clusters. This model is also often used to calculate storage estimates, and it sometimes includes details of storage allocation for a database implementation.
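As a minimal sketch of what such artifacts look like in practice, the example below expresses a table, two constraints, and a performance index as DDL through Python's built-in sqlite3 module. The Course table, its columns, and the index name are hypothetical, chosen only for illustration.

```python
import sqlite3

# A minimal sketch of physical-model artifacts (table, constraints, index)
# expressed as DDL; the Course table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Course (
        CourseId INTEGER PRIMARY KEY,          -- key constraint
        Title    TEXT NOT NULL,                -- NOT NULL constraint
        Capacity INTEGER CHECK (Capacity > 0)  -- CHECK constraint definition
    )
""")
# An index is a physical artifact added purely for performance goals.
conn.execute("CREATE INDEX idx_course_title ON Course (Title)")

# The index now appears in the schema catalogue alongside the table.
names = [row[1] for row in conn.execute("PRAGMA index_list('Course')")]
print(names)  # ['idx_course_title']
```

Note that none of these artifacts would appear in a logical model; they exist only because this particular database management system needs them.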
The physical data model is essentially the output of physical data modeling, which is conceptually similar to design class modeling. Its main goal is to design the internal schema of a database, depicting the data tables, the data columns of those tables, and the relationships between the tables.
In a physical data model, the first step is to identify the tables in which data will be stored in the database.
For instance, a university database may contain a Student table to store data about students. There may also be a Course table, a Professor table, and other related tables to hold related information. The tables are then normalized.
Data normalization is the process by which the data attributes in a data model are organized to reduce data redundancy, increase data integrity, increase the cohesion of tables, and reduce the coupling between tables.
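As a sketch of the idea, the snippet below starts from a denormalized row set in which professor details are repeated on every course row, then splits it into two collections so each professor is stored once. All names and data are hypothetical, and plain Python structures stand in for tables.

```python
# Sketch of normalization: a denormalized row set repeats professor data
# for every course; splitting it into two "tables" removes the redundancy.
# All names and data are hypothetical.
denormalized = [
    ("CS101", "Intro to CS", "Smith", "smith@uni.edu"),
    ("CS102", "Data Structures", "Smith", "smith@uni.edu"),  # duplicated professor data
    ("MA201", "Calculus", "Jones", "jones@uni.edu"),
]

# Normalized: each professor is stored once; each course references a professor.
professors = {name: email for _, _, name, email in denormalized}
courses = [(code, title, name) for code, title, name, _ in denormalized]

print(professors)  # {'Smith': 'smith@uni.edu', 'Jones': 'jones@uni.edu'}
```

With the split in place, correcting a professor's e-mail address is a single change rather than one change per course, which is precisely the redundancy and integrity benefit normalization aims for.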
After the tables are normalized, the columns are identified. A column is the database equivalent of an attribute, and each table has one or more columns. In our example, the Student table in the university database might have columns such as FirstName, LastName, and StudentNumber.
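The Student table from the running example could be sketched as follows, again via sqlite3; the column types are assumptions, since the text names only the columns themselves.

```python
import sqlite3

# Sketch of the Student table from the running example; StudentNumber,
# FirstName, and LastName are the columns named in the text, the types
# are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Student (
        StudentNumber INTEGER PRIMARY KEY,
        FirstName     TEXT NOT NULL,
        LastName      TEXT NOT NULL
    )
""")
cols = [row[1] for row in conn.execute("PRAGMA table_info('Student')")]
print(cols)  # ['StudentNumber', 'FirstName', 'LastName']
```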
The stored procedures are then identified. Conceptually, a stored procedure is like a global method for the database. Examples of stored procedures would be code to compute a student's average mark, a student's payables, or the number of students enrolled in and allowed into a certain course. Relationships are also identified in a physical data model.
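SQLite has no stored procedures, so the sketch below approximates the idea with an ordinary function that runs against the database: a "global method" computing a student's average mark. The Enrollment table, its columns, and the sample data are all hypothetical.

```python
import sqlite3

# Stored-procedure analogue: a routine that lives alongside the schema
# and computes a student's average mark. Schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Enrollment (StudentNumber INTEGER, Course TEXT, Mark REAL)")
conn.executemany(
    "INSERT INTO Enrollment VALUES (?, ?, ?)",
    [(1, "CS101", 80.0), (1, "MA201", 90.0), (2, "CS101", 70.0)],
)

def average_mark(student_number):
    """Compute one student's average mark across all enrollments."""
    row = conn.execute(
        "SELECT AVG(Mark) FROM Enrollment WHERE StudentNumber = ?",
        (student_number,),
    ).fetchone()
    return row[0]

print(average_mark(1))  # 85.0
```

In a database system that does support stored procedures, the same logic would be written in that system's procedural SQL dialect and stored in the database itself rather than in application code.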
A relationship defines how attributes in one table relate to attributes in another table. Relationships are very important in ensuring data integrity in a database after an update, insert, or delete is performed.
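A relationship is typically enforced in the physical schema as a foreign key. The hedged sketch below shows how that enforcement protects integrity on a delete: removing a professor who still teaches courses is rejected. Tables and data are hypothetical, and SQLite requires foreign-key enforcement to be switched on explicitly.

```python
import sqlite3

# A relationship enforced as a foreign key: deleting a professor who is
# still referenced by a course is rejected, preserving integrity.
# Tables and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforcement is off by default
conn.execute("CREATE TABLE Professor (ProfId INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""
    CREATE TABLE Course (
        CourseId INTEGER PRIMARY KEY,
        Title    TEXT,
        ProfId   INTEGER REFERENCES Professor (ProfId)
    )
""")
conn.execute("INSERT INTO Professor VALUES (1, 'Smith')")
conn.execute("INSERT INTO Course VALUES (10, 'CS101', 1)")

try:
    conn.execute("DELETE FROM Professor WHERE ProfId = 1")  # still referenced
    deleted = True
except sqlite3.IntegrityError:
    deleted = False  # the relationship blocked the delete

print("deleted:", deleted)  # deleted: False
```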
Keys are also assigned to the tables. A key is one or more data attributes that uniquely identify a table row, which eliminates data redundancy and increases data integrity.
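A key's uniqueness guarantee is visible in how the database reacts to a duplicate: in the sketch below, StudentNumber serves as the primary key, so a second row with the same value is rejected. The data are hypothetical.

```python
import sqlite3

# A key at work: StudentNumber is the primary key, so inserting a second
# row with the same value is rejected. Data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (StudentNumber INTEGER PRIMARY KEY, FirstName TEXT)")
conn.execute("INSERT INTO Student VALUES (1001, 'Ana')")

try:
    conn.execute("INSERT INTO Student VALUES (1001, 'Ben')")  # duplicate key
    inserted = True
except sqlite3.IntegrityError:
    inserted = False  # the key constraint rejected the duplicate

print("inserted:", inserted)  # inserted: False
```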
Other specifications indicated in a physical data model include the application of naming conventions and of data model patterns.