TalksPosted by pejgan 2018-08-16 21:28:28
So, I thought I’d write up a post about the
topic of my presentation at CAST 2018: Recruiting for Potential.
First of all, this will be a long post. For
those of you that just want the slides, or want to check the slides
before/after you read this post, they can be found here, along with a few of
our cases: http://bit.ly/recruitingforpotential_all
Basically, I have grown uncomfortable with
the standard recruiting process* over the years. No, let’s be honest: I’ve
always disliked it, but I thought it was just me. The more I investigated it,
the more I realized it was not just my total inability to read between the lines and ask
relevant follow-up questions: the whole process is biased and does nothing to
promote diversity or increase the chance of hiring the best candidate.
Some of the things I noticed are kind of
obvious, such as that confident, calm and positive people tended to have easier
interviews and more often got call-backs. This was especially true for people a bit too senior
for the position (the safe bet) and people who were a lot like the
interviewer. Of course, there is a big impact from racism,
sexism and all the other horrible prejudices we carry, but there are so many
other, more subtle biases involved!
I started reading up on bias and found a
rather intimidating video about how resumes, interviews, references and
all the zillion tests we use have very little actual correlation with finding the
right candidate. Our biases own us, and until we acknowledge them we won’t ever
get past them. I can’t find the original video, but this is a great short one
on bias: https://www.youtube.com/watch?v=Q0ne13lv8hE
To make my assessment a bit fairer, and to find the potentially great tester rather than the person
with the fanciest resume and the best interviewing skills, I decided to include a
practical exercise as part of the process. This is standard when recruiting
for some roles, but I had never done it and I wanted to do it well.
One thing I didn’t want to do was an
unprepared, on-site assignment. I know a lot of people like it, but I think it
(also) favours extroverts over introverts and, frankly, I’m not a good enough
coach yet to make it a good experience for the candidate. So, I decided to send
out an assignment in advance and then discuss the results at the interview.
Next thing was that I wanted to look at
more than “just” what tests the candidate could come up with. So, the case
looks at multiple perspectives and parts that we think are important: analysis,
planning, execution, storytelling and reporting.
We set up an application, created a
background story to our fictional business and wrote a specification with a
number of requirements.
Iteration number one looked something like this:
It was an amazing experience! I found out
so much about my own collection of biases and how important (and hard!) it is
to find true passion in a person. We used this case several times, but as I grew more
aware of bias and diversity issues, I started to see problems with it. The main
one being that it took way too much of the candidates' time. It doesn’t matter
that the case can be solved in a few hours and that we specifically said so;
people want to do a good job and spend way, way too much time. It had also never
occurred to me that people might not have the time or equipment to do the
exercise. They might not be able to afford it. My main takeaway is this: respect
other people. Make sure you pay for their time, or make the exercise no longer than a
normal interview would be. Offer to provide whatever equipment they might need
to complete it.
Or: be very aware that you are pushing some
candidates out the door.
This first case made for some amazing
discussions though, and I still use it for other purposes such as training.
We adapted it for whatever role we were
recruiting for (such as business analyst/requirements analyst, test lead, etc.)
but the basic format looked the same. We had some unexpected reactions, such as
candidates cancelling interviews because they thought the case did not match
the role, but all in all it has been a really positive experience.
The latest iterations have tried to narrow
and deepen the scope. When we were looking to recruit a tester to a team where
it was important that the person had good coding skills and deep technical
knowledge, we asked a few specific questions meant to reveal
not only whether the candidate can find bugs, but also:
- Do they do shallow or deep testing?
- Can they tell a story?
- If they get the code, can they read it? Suggest a fix?
- Are they comfortable using …?
- Can they suggest a good test for this bug fix?
The case can be found here: http://bit.ly/TechTester
In that interview we spoke a lot about what
the candidates thought about things like automation, security testing, model-based
testing, load and performance testing and the concept of “Automate all the things”.
This is my favourite so far and a format
that I think we will stick to in future processes. Of course, which questions we
ask will differ with what is important to the team then and there (and what we
see in the near future).
At CAST, the Q&A afterwards raised some
interesting ideas, and among those I will try to implement at least one: giving
the candidate the choice between a home and an onsite assignment. It shames me that I
never thought of this myself, but that’s bias for you! I hate onsite assignments
and did not see that my preferred way might actually be a nightmare for someone else.
If you got this far: thank you for your time!
* The standard process I am referring to
might not look the same in your country/culture, but in Sweden it looks
something like this: a recruiter or manager scans resumes. Then follows a
number of interviews (how many depends on the company), and then often some kind
of test(s). The tests might look at things like personality, motivation,
performance or even IQ. As a last step, companies often contact a few references.
TalksPosted by pejgan 2018-08-09 21:29:29
So, the first in a series of posts about CAST is about the workshop I had the pleasure of doing with my friend and colleague Tomas Rosenqvist.
The title was "Speedtesting using mind maps" and the workshop consisted of 90 intense minutes of interaction with the awesome "Bling R Us" webshop that we created some time ago as a recruitment exercise (I'll get back to that in the next post...)
The purpose of the workshop is to show how using a mind map template can give you a structure to your testing but still let you move fast.
The exercise was divided into 3 parts: Planning, execution and documentation and each part consisted of a short introduction, doing the actual stuff and then a debrief.
To summarize it, for a specific session:
- Set a mission
- Figure out a good approach
- Spell out all your (relevant) assumptions explicitly
- Make a really, really rough plan for your testing
- Go for it!
- Take loads of notes
- Clean it up and reflect on your work
- Make sure you use as positive a language as possible to maximize the impact of your report
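As a rough sketch, the steps above could be captured in a minimal session-notes skeleton. The field names and example values here are my own invention, not part of the workshop material or the XMind template:

```python
# A minimal sketch of a speedtesting session as a note-taking structure.
# Field names and example content are illustrative only.
session = {
    "mission": "Explore the checkout flow for pricing errors",
    "approach": "Vary quantities, currencies and discount codes",
    "assumptions": ["Test data resets nightly", "Only desktop browsers in scope"],
    "rough_plan": ["Happy path first", "Boundary quantities", "Discount codes"],
    "notes": [],       # filled in during execution
    "debrief": "",     # cleaned-up reflection afterwards
}

def report(s: dict) -> str:
    """Render a short summary for the reporting step."""
    lines = [f"Mission: {s['mission']}", f"Approach: {s['approach']}"]
    lines += [f"Assumption: {a}" for a in s["assumptions"]]
    return "\n".join(lines)

print(report(session))
```

The point is only that spelling the mission, approach and assumptions out explicitly gives the session a structure, while still leaving the plan rough enough to move fast.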
It was… absolutely AWESOME. I loved the energy and enthusiasm in the room, and my only regret was having to interrupt people due to time constraints.
Slides, the template (XMind), an example of a filled-out template (XMind) and a PDF with the instructions and specification are available at https://bit.ly/SpeedTesting
Thanks to everyone who participated, and if you try it out for yourself: please send us feedback and your results so we can iterate and improve!
@LenaPejgan & @muamaidbengt (Twitter)
TalksPosted by pejgan 2018-06-22 09:27:02
So, on May 16th I attended the 1-day workshop "Kravhantering på riktigt", a new Swedish conference about requirements work, and I was fortunate enough to be allowed to present how mind mapping and visual models can help.
I've been speaking at dev conferences and testing conferences, but this was my first time speaking to a room full of requirements professionals. A nice twist!
Among other things, I spoke about the fear of the things we know we don't know and how that fear can hinder us. Instead of fearing it we should embrace that awareness and be happy about all the things we actually know!
I also spoke about the big pile of unspoken or unknown requirements and how our assumptions create a lot of problems.
My takeaway from the different talks is that we all spoke about the same challenges, but from different perspectives. Some of our slides even showed the same principles, used in different ways. Quite interesting!
My slides (in Swedish) can be found here:
TalksPosted by pejgan 2018-03-13 19:31:15
Just came back from UKSTAR 2018 and I thought I'd summarize my experience while it's still fresh in memory.
So, first of all: I only had the chance to attend one of the two days. Day 2 actually had a lot of interesting talks that I wish I could have attended, but unfortunately I had to go back home.
Day 1 consisted of 2 keynotes, 1 workshop session and a bunch of tracks.
First off: the opening keynote by Christina Oharian was awesome. She is a great speaker and had a very interesting story to tell about how to grow a community. A great way to start the day! I felt very energized and got a few ideas to try out with our own competence forums.
Next up were the workshops, and I had chosen to attend Empowering People Through Play by Christina Oharian and Nicola Sedgwick. We got to try out a few games that can be used to get teams to talk, think about what they do, or just energize people. I met my old friend the Lego Duck once again, so of course I was happy :-D The biggest takeaway was definitely when we had to draw a postcard from instructions given by one person in the team, then describe the postcard on a sheet of paper, after which another team had to draw our postcard from our requirements. Let's just say we did a quite poor job, even though we tried mind maps. As I am a big fan of mind maps I felt a bit bad about the result, but the biggest thing with visual models is of course the iterative process from first rough sketch to refined model, and we only had one shot. Oh well! It was great fun and I'm sure to try it out with one of my teams!
Next was a track about Quality of Things, which was interesting but a bit too far away from my own work for me to really get everything. It was quite interesting to see the problems that occur when you have to test things like smoke detectors!
Then came Blockchain Applications And How To Test Them by Rhian Lewis, an amazing woman I've talked to online in our Women in Testing group. She was so much fun to listen to! Great body language, great energy, and I had no idea blockchains could be used for so many things! I actually want to try it out, even if I have no idea what for...
The middle of the day is a big blur. I had to skip a few tracks due to being overwhelmed by the number of people and nerves over my own talk, so after lunch I stayed and talked to Christina, Nicola and Dan Billing, which was so much fun. Lovely people! Speaking of people I talked to, I also have to mention Harry Girlea. He is 13(!) and already more experienced as a speaker than I am. I so wish I had gotten the chance to see his talk. Can you believe that once he is old enough to vote he will be considered a senior tester! A guy to keep an eye out for.
OK. Next up was the fantastic Marianne Dujist with Self-Organizing the 24h GoForIT Innovation Challenge. A great, inspiring talk about innovation and not holding people back. You rock, Marianne!
Then the trio of 8-minute talks that I was part of was due, and I had the privilege of finishing off after Kinga Witko (Make IT Accessable) and Rick Tracy (Testing Tests and Where to Test Them). Eight minutes felt impossible when I started out. Remember, this talk is something I've done a lot in the last year, but usually in 30-60 minutes! I think, however, that I managed to strip away all of the story and just keep the core: the what, how and why of visual modelling.
After a few drinks with my lovely friends in Women in Testing it was time to go back to the hotel and pack. Lovely seeing you all, can't wait for the next time.
My slides are available here:
I got a question about writing a blog post about visual modelling, and I will try!
ConferencesPosted by pejgan 2018-02-18 20:37:49
So this weekend I attended TestSverige UnConference 2018 along with a group of other members. I've attended a few unconferences before (Agila Sverige, for example), but this was the first time with a really limited number of attendees.
In the end we were 10 eager participants, which might sound small but allowed for really passionate discussions (that might have spilled into dinner, after dinner and breakfast...)
The first draft of the schedule gave us 2 tracks with 2 sessions of 30 minutes between each bigger break, but in the spirit of agile we decided to only plan until the first fika (coffee break), then plan until lunch, then until the next break and so on. After each session we had a quick recap from each group, and then you chose your next session. Choosing which one to join was sometimes painful, as you might want to attend both...
Topics chosen were:
- Model-based testing
- What does a test coach do?
- The lone tester in an agile team
- Test culture vs. strategy
- Drunk hamster deploy
- Agent-based testing
- Motivation - yours and others'
- AI and testing
- The future of TestSverige
I had brought my Moving Motivators cards, so I was glad to get a chance to discuss motivation. We each prioritized the cards and discussed each person's choices. An interesting point made was that we interpreted the terms really differently, which is the point of the generalized words and descriptions.
I encourage everyone to think about their own motivations and be open to the fact that others probably value other things and see your motivations in another light.
Göran Bakken brought a stack of interesting books; Explore It! is one of my personal favourites.
My main takeaways from the discussions are:
- Combining model-based, agent-based and AI/Machine Learning is a really interesting possible future that should be investigated.
- Given the right people, environments and prerequisites you could have an application without technical debt and have enough faith to be able to let a drunk hamster deploy your code
- Communication is key. Communicate what you can do but also what you can't.
- Most testers could, with a bit of effort, learn to do basic security testing that a lot of us think is really complicated and requires deep technical knowledge. There are a lot of cool tools out there to try, and even with just Bug Magnet or the Big List of Naughty Strings you could do a lot more than nothing.
- Maintainability is crucial, but requires a good strategy, testing on the right level, working together in the team on _all_ test levels and re-using good, useful components/objects/modules/fixtures (call them whatever works in your context!)
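To illustrate how little is needed to get started with the naughty-strings idea above, here is a toy sketch. The validator and the probe strings are my own examples, not taken from the actual Big List of Naughty Strings or from Bug Magnet:

```python
# A toy input validator and a handful of hand-picked "naughty" probes,
# in the spirit of the Big List of Naughty Strings. Illustrative only.
NAUGHTY = [
    "",                           # empty input
    "' OR '1'='1",                # classic SQL injection probe
    "<script>alert(1)</script>",  # XSS probe
    "../../etc/passwd",           # path traversal probe
    "A" * 10_000,                 # absurdly long input
]

def validate_username(value: str) -> bool:
    """Accept 3-20 characters: letters, digits or underscore."""
    stripped = value.replace("_", "")
    return 3 <= len(value) <= 20 and (stripped == "" or stripped.isalnum())

# A robust validator should reject every probe without crashing.
rejected = [s for s in NAUGHTY if not validate_username(s)]
print(f"{len(rejected)}/{len(NAUGHTY)} naughty inputs rejected")
```

Even a loop this small, pointed at a real input field instead of a toy function, is already basic security testing: you are checking that hostile input is rejected cleanly rather than crashing or slipping through.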
But, as always: the best part is meeting other testers. Even though the schedule ended around 18:00, the discussions and exchange of knowledge continued into the evening and even over breakfast.
Met people I knew from before, had heard of and people completely new to me and I hope I get to meet each and everyone again.
Thank you all for being so generous with yourselves!
See you next time and don't be strangers, reach out whenever you want.
TalksPosted by pejgan 2018-02-16 22:57:53
Very weird (but nice) seeing yourself in print!
Apart from technical issues with screen resolution vs. Prezi, the presentation in general felt great. Got lovely feedback and great questions from the audience.
Presentation is available here:
TalksPosted by pejgan 2018-02-16 22:50:26
I had the great pleasure of returning to Swetugg to do another presentation this year.
I talked about ten areas where I often find bugs* and how to avoid them.
I talked about requirements, layout, only doing positive tests, not working as one team, assuming instead of analyzing and much more.
My presentation is available here:
*My definition here: Anything reducing customer value