Sometimes it is useful to have a list of criteria a team can choose from when drafting its initial Definition of Done ("DoD"). You will find such a list below, after a piece of theory from the Scrum Guide:
> …everyone must understand what “Done” means …to ensure transparency …and is used to assess when work is complete on the product Increment.
>
> The same definition guides the Development Team in knowing how many Product Backlog items it can select during a Sprint Planning.
>
> If “Done” for an increment is not a convention of the development organization, the Development Team of the Scrum Team must define a definition of “Done” appropriate for the product.
>
> As Scrum Teams mature, it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality.
>
> Any one product or system should have a definition of “Done” that is a standard for any work done on it.
So, here are some examples of DoD criteria.
- Written by the author
- Reviewed by a peer
- Merged to Main/Master
There are plenty of criteria for testing, so I have gathered them into groups by dimension. You may take any type of testing and divide it by process step, scope, and environment, e.g., “Integration is manually tested by a tester in the UAT environment”.
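The dimensions combine multiplicatively, so the space of candidate criteria grows quickly. A minimal sketch of enumerating that space (the dimension values below are illustrative placeholders, not a prescribed taxonomy):

```python
from itertools import product

# Illustrative dimension values -- adapt to your team's context.
test_types = ["Integration", "Regression"]
executions = ["manually", "automatically"]
roles = ["the author", "a tester"]
environments = ["DEV", "UAT"]

# One candidate DoD criterion per combination of dimension values.
criteria = [
    f"{t} is {e} tested by {r} in the {env} environment"
    for t, e, r, env in product(test_types, executions, roles, environments)
]
print(len(criteria))  # 2 * 2 * 2 * 2 = 16 candidate criteria
```

This is why a team picks a handful of combinations rather than adopting every possible criterion.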
Tests by process:
Tests made by role:
- Manual DevTest by the author
- Manual DevTest by a peer
- Manual test by a Tester/QA engineer
- PO approved
Type of testing:
- NFRs (non-functional requirements)
Tests by objects/scope:
- Unit A, B, C
- Component C, D, E
Number of defects:
- Critical, Major, High = 0
- Medium < 10
Other aspects to apply:
- Coverage > 75%
- Acceptance criteria pass/OK
- By type: User, Operations/support
- By process (see above)
- Deployment and rollback plans: known/written, reviewed by Ops
- DB update scripts: written, tested
- Environment X,Y,Z ready
- Deployed to environment X,Y,Z
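Some of these criteria are measurable and can be checked automatically in CI. A hedged sketch of such a gate, using the coverage and defect-count thresholds from the example lists above (the function name, inputs, and thresholds are hypothetical; adapt them to your team's DoD):

```python
# Hypothetical DoD gate: returns the list of violated criteria,
# so an empty result means the measurable part of "Done" is met.
# Thresholds mirror the example criteria above.

def dod_gate(coverage: float, defects: dict) -> list:
    """Check measurable DoD criteria; return violations (empty = pass)."""
    violations = []
    if coverage <= 75.0:
        violations.append(f"Coverage {coverage}% is not above 75%")
    for severity in ("Critical", "Major", "High"):
        if defects.get(severity, 0) != 0:
            violations.append(f"{severity} defects must be 0")
    if defects.get("Medium", 0) >= 10:
        violations.append("Medium defects must be below 10")
    return violations

if __name__ == "__main__":
    report = dod_gate(coverage=82.5, defects={"Critical": 0, "Medium": 3})
    print("DONE" if not report else "NOT DONE: " + "; ".join(report))
```

Criteria that are not machine-checkable (reviewed rollback plans, PO approval) stay on the human checklist; the point is only that the quantitative ones need not.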