Below are some of the tips I picked up while interacting with key members of the team.
The Big Picture
The overall release is divided into several phases, as mentioned below.
Notice that SIT and UAT are run independently, towards the end of the release.
This is time-consuming but does reduce the risk of bugs in the release.
This is more relevant for software that needs extremely thorough testing.
- Pre foundation Phase (1 week)
- Foundation Phase (2 weeks)
- Sprint 1 (2 weeks)
- Sprint 2
- Sprint 3
- Sprint 4
- Sprint 5 Final / Hardening Sprint
- SIT Test (1 month)
- UAT Test (1 month)
- Release
During the foundation phase, high-level estimation is done using a technique called T-shirt sizing
(S, M, L, XL, XXL, XXXL). This helps in deciding the scope of sprints.
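A minimal sketch of how T-shirt sizes might feed into sprint scoping. The size-to-effort mapping and the greedy fit are assumptions for illustration; the post does not give the team's actual numbers.

```python
# Hypothetical mapping from T-shirt size to rough effort in ideal days;
# the numbers are illustrative assumptions, not from the team.
TSHIRT_DAYS = {"S": 1, "M": 3, "L": 5, "XL": 8, "XXL": 13, "XXXL": 21}

def rough_scope(items, capacity_days):
    """Greedily pick (name, size) items that fit within sprint capacity."""
    planned, used = [], 0
    for name, size in items:
        days = TSHIRT_DAYS[size]
        if used + days <= capacity_days:
            planned.append(name)
            used += days
    return planned, used

stories = [("login", "M"), ("reports", "XL"), ("search", "S"), ("export", "L")]
print(rough_scope(stories, capacity_days=12))
# → (['login', 'reports', 'search'], 12)
```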
Plan for sprints to progressively gain velocity.
Balancing R&D and delivery
Instead of running 2-week spikes as full sprints, only spike tasks are run, to balance R&D effort with delivering usable functionality.
1 Story = Spike Task 1 + Task 2 + Task 3 + ...
A sprint includes a sensible mix of technical stories and functional stories.
Stubs are used for technical components planned for future technical stories.
Estimation
Stories are estimated in story points using estimation poker.
Tasks are always estimated and tracked in hours spent.
The tool provides for estimating task hours and also recording actual hours.
Over several successive sprints, a good average estimate of 1 story point = xyz hours crystallizes.
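The hours-per-point average can be derived directly from the estimated-vs-actual data the tool already captures. A sketch, assuming a simple per-sprint data shape (the figures below are made up):

```python
# Illustrative per-sprint totals; the real numbers would come from the
# tracking tool's completed-story-points and actual-hours fields.
sprints = [
    {"story_points": 20, "actual_hours": 160},
    {"story_points": 25, "actual_hours": 190},
    {"story_points": 22, "actual_hours": 175},
]

total_points = sum(s["story_points"] for s in sprints)
total_hours = sum(s["actual_hours"] for s in sprints)
hours_per_point = total_hours / total_points
print(f"1 story point ~= {hours_per_point:.1f} hours")
```

As more sprints complete, this average stabilises, which is what lets the team translate story points into calendar estimates.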
Sample Story Status:
New
Analyzed
Approved
Dev WIP
Dev Blocked
Dev Done
Test WIP
Test Blocked
Test Done
Dev Done includes code review.
Test Done includes a show-and-tell, given by the tester to convince the BA.
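The status list above is effectively a small state machine. A sketch of it as an allowed-transitions map; the transition graph itself is an assumption layered on the listed statuses, not something the post spells out.

```python
# Hypothetical allowed transitions between the story statuses listed above.
TRANSITIONS = {
    "New": {"Analyzed"},
    "Analyzed": {"Approved"},
    "Approved": {"Dev WIP"},
    "Dev WIP": {"Dev Blocked", "Dev Done"},
    "Dev Blocked": {"Dev WIP"},
    "Dev Done": {"Test WIP"},
    "Test WIP": {"Test Blocked", "Test Done"},
    "Test Blocked": {"Test WIP"},
    "Test Done": set(),  # terminal
}

def advance(status, new_status):
    """Move a story to new_status, rejecting illegal jumps."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Cannot move story from {status} to {new_status}")
    return new_status
```

Encoding the workflow this way makes illegal shortcuts (e.g. New straight to Dev WIP) fail loudly instead of silently.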
Very succinct definition of done.
Sample DoD for dev complete:
- Impact Analysis document done
- Code complete
- Code reviewed
- NFR, performance, security tests pass
- Regression tests pass
- All known bugs raised by test team resolved
About Code and CI
Release-wise branch in SVN (version control)
No branches per sprint
No parallel sprints
SVN checkin format:
<Release>_<Module>_<Story>_<Task/Defect>: Description
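A check-in message format like this is easy to enforce with a pre-commit hook. A minimal sketch; the regex and the example release/story/task token shapes are assumptions, since the post only gives the placeholder format.

```python
import re

# One underscore-separated token each for release, module, and story,
# then a task/defect token, a colon, and a free-text description.
PATTERN = re.compile(r"^[^_]+_[^_]+_[^_]+_[^:]+: .+$")

def valid_checkin_message(msg):
    """Return True if msg matches <Release>_<Module>_<Story>_<Task/Defect>: Description."""
    return bool(PATTERN.match(msg))

print(valid_checkin_message("R1_Billing_ST101_T7: Add invoice totals"))  # True
print(valid_checkin_message("fixed stuff"))  # False
```

In SVN this could run from a `pre-commit` hook that rejects the commit when the check fails.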
Automated regression testing in place
Other Takeaways:
1.
Very detailed acceptance criteria: yes/no questions, fill-in-the-blanks, unambiguous answers.
Is the xxx panel visible as seen in the logical diagram? Yes/No
Does the table/grid contain 13 rows? Yes/No
The quality of a story is determined by how detailed its acceptance criteria are.
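Because every criterion is a yes/no question, acceptance reduces to a checklist where all answers must be yes. A sketch, assuming criteria are stored as (question, answer) pairs:

```python
# (question, answered_yes) pairs; the first question's "xxx" placeholder
# is kept from the post, the structure itself is an assumption.
criteria = [
    ("Is the xxx panel visible as seen in the logical diagram?", True),
    ("Does the table/grid contain 13 rows?", True),
]

def story_accepted(answers):
    """A story is accepted only when every yes/no criterion is a yes."""
    return all(answered_yes for _, answered_yes in answers)

print(story_accepted(criteria))  # True
```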
2.
A story status like "Test Blocked" gets acted on immediately: if a tester cannot test a story, they call up the dev right away or write an email.
All blockers get mentioned in the standup.
3.
The testing team is always lightly loaded at the start of a sprint and overloaded towards the end. To reduce this pain point:
Keep stories very fine-grained, e.g. each story should add up to only a few hours of dev tasks.
This way, dev always has something testable for the tester throughout the sprint, and idle time for testers (waiting for testable functionality) is reduced.