Test-n'-stuff.

When life gives you lemons - don't let Bobby Drop-Tables steal them

Testing | Posted by pejgan 2019-07-04 19:25:58

So my recent Power Hour at Ministry of Testing got me thinking.
To start: I'm sorry, tomorrow-morning-me; apparently I had to write this now, while riding the rush of that experience.
Why am I feeling this rush? From a Q&A? Well. I'm doing a workshop about security testing and OWASP Juice Shop this fall and I've been worrying about it.
- Maybe it's too easy and scripted?
- Will people learn anything?
- What if people who know more than me show up and get disappointed?
- What if someone gets stuck and I can't help them?


Well, the questions and reactions during the Power Hour (and discussions outside of it) have led me to realize a few things:
1. A lot of people are completely intimidated by the concept of security testing
2. A lot of the material out there expects a certain level of basic knowledge
3. And a lot of people can't make it over that first hurdle
4. I've got this.

Yes. A few people will find it too easy. Some might even find it too hard. But you know what? Others will find it exactly right. And then it's a win.

So. Let's back up. I need to tell you a story.


My first direct contact with security testing was almost a decade ago. It was my first web project, and a security audit introduced me to the concept of SQL injection. I was intrigued! I learned a bit, enough to let me verify that it had been fixed, and added another tool to my tool belt without thinking about it.
Enough to understand Bobby Tables jokes, but not much more.
(Link to the comic: XKCD: Exploits of a Mom, https://xkcd.com/327/)
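
To make the joke concrete, here is a minimal Python sketch of the failure mode and the standard fix. The table and the hostile input are made up for illustration, not taken from that audit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# The classic mistake: building SQL through string concatenation,
# which lets attacker-controlled text become part of the statement:
name = "Robert'); DROP TABLE students;--"
# conn.executescript("INSERT INTO students (name) VALUES ('" + name + "');")
# ...would happily run the injected DROP TABLE.

# The standard fix: parameterized queries keep data as data.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
print(conn.execute("SELECT name FROM students").fetchone())
# The whole hostile string is stored harmlessly as a name.
```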

In that same project I learned about testing SOAP services and how to manipulate the calls, and by doing that I made myself a little more versatile, again without realizing it.
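
For anyone who has never poked at SOAP: a hedged sketch of what "manipulating calls" can look like. A SOAP call is just XML over HTTP, so any field can be hand-edited before sending; the endpoint, action and element names below are invented:

```python
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders">
      <OrderId>1337</OrderId>
    </GetOrder>
  </soap:Body>
</soap:Envelope>"""

# Try other ids, negative numbers, text instead of numbers, huge
# payloads... and watch how the service responds.
resp = requests.post(
    "https://example.com/OrderService",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/orders/GetOrder"},
)
print(resp.status_code, resp.text[:200])
```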

Next up were sessions and query strings. I learned how to manipulate those, and just how easily that could be exploited.
Up until that point, I think it felt like something only "true hackers" would do, and I didn't put much weight on those types of testing. But with query-string variables it became so easy! I mean, if you put something glaringly obvious in your URL, someone is bound to be curious and try to change it. Someone who might not even mean to do harm.
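
A minimal sketch of that kind of check, with a made-up URL and cookie name; the point is simply to see what comes back when you ask for an id that isn't yours:

```python
import requests

# If your own invoice lives at ?invoice=1001, what happens for 1002?
# Anything other than a 403/404 for someone else's id suggests
# broken access control.
session = requests.Session()
session.cookies.set("sessionid", "your-own-session-cookie")  # made-up name

for invoice_id in (1001, 1002, 1003):
    r = session.get("https://example.com/invoice",
                    params={"invoice": invoice_id})
    print(invoice_id, r.status_code)
```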


Over the years I've picked up small things, piece by piece, to string onto my pearl necklace of security testing, and for me a big change came a few years back. That year I had been reading a lot of audit reports, due to my role at the time, and after the third or fourth I was confused and annoyed.
- Are they just sending in the same reports with switched title pages? Because they sure say the same thing.
- This is not rocket science. I understand this. I know how to check for these things.
- We are making the exact same errors as in that first web project of mine.


Which led to:
- WHY ARE THEY STILL CODING LIKE THIS?? THEY SHOULD KNOW BETTER!
- WHY ARE WE NOT TESTING THIS?? WE SHOULD KNOW BETTER!
- WHY... oh.... damn. Why am I not pushing for this? Why have I not made sure we test this?


Because if I can understand the reports, I can figure out how to check for the issues. If I can figure out how to check for them, I can find someone who knows how to fix them. If I can learn how to fix them and check for them, I can teach others. If I teach others, they will grow and learn, and somewhere along that line we might just stop introducing those errors over and over again.


And that is where my mission started. I want to find a way to make the bar low enough for anyone with basic software knowledge to take that first step. Learn that one thing. And then the next. And the next. I want people to understand that yes, it is a VERY complex area, and the difference between a TRUE expert and someone like me is huge. But that doesn't mean I can't check that the basic things, which frankly all applications should protect against, are fixed, tested and, in the end, never introduced in the first place. And THEN we can add some real penetration testing on top of that, done by experts.
The workshop was born from that. And I hope that after the workshop we can refine it, make it even better, keep building upon it and eventually release it into the wild!


Because:
Security should be on our mind from step one and all the way to the end.
- It should be a factor in all requirements (such as: evil user stories, evil personas, threat modeling, just thinking "how could someone exploit this")
- It should be a factor in our coding (such as: programming guidelines, SonarQube, WhiteSource Bolt, ZAP, just thinking "how could someone exploit this"; see the sketch after this list)
- It should be a factor in our testing (such as: Bug Magnet, OWASP ZAP, Chrome DevTools security audit, Postman, just thinking "how could someone exploit this")
- It should be a factor in our operations (such as: monitoring and analyzing behaviour, configurations, audits, logging... I can keep going forever and ever)
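
Because "build it into your pipeline" can sound abstract, here is a rough sketch of one such step using ZAP's Python client (zapv2). It assumes a ZAP daemon already running on localhost:8080 and a target you are allowed to scan (here, a local Juice Shop); it is an illustration, not the workshop material:

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

# Assumes a ZAP daemon on localhost:8080 and a target that is
# yours to scan, e.g. a local OWASP Juice Shop.
target = "http://localhost:3000"
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(target)  # crawl the application
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Print whatever passive scanning flagged while spidering.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], "-", alert["alert"], "@", alert["url"])
```

The same loop drops naturally into a CI job: fail the build if any high-risk alert shows up.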


Build it into your pipeline. Build it into your culture. Learn it. Preach it. Teach it.









Testing vs. Checking – separate entities or part of a whole?

Testing | Posted by pejgan 2019-03-12 12:42:51

There has been a lot of discussion, heated and calm, about the concept of testing. What it is. What it is not. Arguments about real testing vs. what a lot of people imagine when they hear the word. Twitter and blog posts talking about “real testing”, “the future of test”, “no testing” and so on, seemingly endlessly.

To be honest, I’ve had a hard time grasping a lot of it and some distinctions don’t make sense to me. But. I’ve also had revelations and that is what this blog post will be about.

So, let’s start. What is testing actually?

To me, testing = Checking + Exploration. Not one or the other, both. (Shoutout to Elisabeth Hendrickson and her awesome book Explore it!)

Let’s begin with checking. The stuff that some will say is not "real testing".

My opinion? Well. If you _know_ something is true/false, black/white, start/stop, by all means - check it. Go ahead, it's fine. Yes, I promise! If you _know_, from a reliable source, that you are supposed to be able to log in to the site with credentials XYZ from device A, then do it (a minimal sketch of automating such a check follows the list below).

  • Is it important enough that you want to check it all the time? Automate it.
  • Is it time consuming enough that you notice yourself avoiding checking it manually? Automate it.
  • Is it something that would help the team to get instant feedback on when it's not working? Automate it.
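
A minimal sketch of what such an automated check might look like, written as a pytest-style test against a hypothetical /login endpoint (names and assertions are made up):

```python
import requests

BASE_URL = "https://example.com"  # hypothetical application

def test_login_with_known_good_credentials():
    # A known true/false statement, checked the same way every build.
    resp = requests.post(f"{BASE_URL}/login",
                         data={"username": "xyz", "password": "xyz"})
    assert resp.status_code == 200
    assert "Welcome" in resp.text
```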


Checking, to me, has an important part to play in what I call testing.

  • Important because knowing that I’ve checked the “obvious” gives me confidence to go further and deeper.
  • Important because knowing stuff works in the basic ways frees up headspace and time for more creative work.
  • Important because testing things in multiple different ways is more valuable than doing one way perfectly.
  • And let’s be honest: Important because it still finds a lot of issues quickly.

Exploration, to me, is all the other parts. The stuff that might be hard to explain to someone else.

To me, exploration is about framing a hypothesis, inventing an experiment to try to _falsify_ it (yes, falsify, not prove), executing the experiment, and then observing, analyzing, learning and adapting.

  • It can lead to broken assumptions, a feeling of knowing less than before, learning almost nothing useful. Which is OK; it will make me a better tester tomorrow.
  • It can lead to new stuff to check next time. Possibly/probably checks to automate. Which is awesome, it will allow me to do even more deep-diving next time.
  • It can lead to us having to re-think the entire solution. Which is GOOD, it will make the solution better.
  • It will find things checks can't possibly find, because we didn't know about them and therefore couldn't check for them. It will find risks, loopholes, cracks in the ice, and give us information that lets us make better decisions.
  • Some will say it's more creative and fun, some will say it's "real testing", some will say it's a waste of time.

I say it's the stuff that makes me a very valuable asset in any development team. It also makes me worry too much. Imagine being unable to shut off the "hm, I wonder what could possibly go wrong with this scenario" at any point in your life, say for example while trying to look forward to travelling...

So, where am I going with this? Well, I am trying to explain why I think we have such a big gap between the "Automate everything" people and the "That's not real testing" people.

I confess I struggle with models like the DevOps infinity loop. It puts "TEST" in bold letters in an isolated part of the loop, and if that is your view of the world, how can you ever take someone like me seriously? It seems impossible to find common ground without a long discussion about "what is testing, really?". And how do I explain that when I say test, I don't mean executing a particular scenario in an application; I mean basically most of the things I do from the time I hear a user express a need (yes, I am horrible to talk to) to the time I stop working on a particular system.

  • I test the importance of that need by challenging it. Usually by talking to people.
  • I test the proposed solution by challenging it. Usually by asking questions or making explicit assumptions.
  • I test the process by challenging it. Usually by observing where people tend to take shortcuts or get flustered from not knowing how to proceed.
  • I test the solution under creation by challenging it. Usually by writing some sort of description of how I will test it and making as many people as possible tell me what I did wrong.
  • I test the solution when it's "done" by challenging it. Usually by a combination of checks (the stuff I _know_) and exploration (the stuff I _believe_).
  • I test the delivery by challenging it. Usually by analyzing data, either from an analytics tool or from actually _talking_ to people. (Did you know customer service tends to categorize calls? Do you know how much information you can get by just checking in with them after a release? It's awesome!)

So, if that is what I mean by testing, and the person I talk to sees it as writing some sort of test descriptions, executing them and then reporting the result... it will clash. And it does. Repeatedly.

Some will argue exploratory testing encompasses all the aspects I’ve mentioned and won’t approve of me separating them. They will argue exploratory testing is more a way of thinking of testing and I totally get it. I really do!

But. What I’ve come to realize is that to me, checking and exploration are two totally different mindsets that make me act and think differently. Yes, call them what you want, but I need to context-switch to change between them, just as I need to switch to another way of thinking when building something. And I do. I switch. And I need to do that switching in order to be the best possible tester I can be. So, to me they are all tools I use for solving different tasks.


I’ll stop there for now. In the next blog post I’ll share an example from a previous assignment.

Cute kitten to calm any upset feelings:

Photo by Romane Van Troost on Unsplash








CAST 2018 #2: Recruiting for Potential

Talks | Posted by pejgan 2018-08-16 21:28:28

So, I thought I’d write up a post about the topic of my presentation at CAST 2018: Recruiting for Potential.

First of all, this will be a long post. For those of you that just want the slides, or want to check the slides before/after you read this post, they can be found here, along with a few of our cases: http://bit.ly/recruitingforpotential_all

Basically, I have grown uncomfortable with the standard recruiting process* over the years. No, let’s be honest: I’ve always disliked it, but I thought it was only me. The more I investigated it, the more I realized it was not just my total inability to read between the lines and ask relevant follow-up questions: the whole process is biased and does nothing to promote diversity or increase the chance of hiring the best candidate.

Some of the things I noticed are kind of obvious, such as that confident, calm and positive people tended to have easier interviews and more often got call-backs. Especially people a bit too senior for the position (the safe bet) and people who were a lot like the interviewer. Of course, racism, sexism and all the other horrible prejudices we hold have a big impact, but there are so many other, more subtle biases involved!

I started reading up on bias and found a rather intimidating video about how resumes, interviews, references and all the zillion tests we use have very little actual correlation with finding the right candidate. Our biases own us, and until we acknowledge them, we won’t ever get past them. I can’t find the original video, but this is a great short one about bias: https://www.youtube.com/watch?v=Q0ne13lv8hE

To make my assessment a bit fairer, and to find the potentially great tester rather than the person with the fanciest resume and the best interviewing skills, I decided to include a practical exercise as part of the process. This is standard when recruiting for some roles, but I had never done it and I wanted to do it well.

One thing I didn’t want to do was an unprepared, on-site assignment. I know a lot of people like it but I think it (also) favours extroverts over introverts and frankly, I’m not a good enough coach yet to make it a good experience for the candidate. So, I decided to send out an assignment in advance and then discuss the results at the interview.

The next thing was that I wanted to look at more than “just” what tests the candidate could come up with. So, the case looks at multiple perspectives and parts that we think are important: analysis, planning, execution, storytelling and reporting.

We set up an application, created a background story to our fictional business and wrote a specification with a number of requirements.

Iteration number one looked something like this: http://bit.ly/CaseTester

It was an amazing experience! I found out so much about my own collection of biases and how important (and hard!) it is to find true passion in a person. We used this several times, but as I grew more aware of bias and diversity issues, I started to see problems with it. The main one being that it took way too much time for the candidates. It doesn’t matter that you can solve it in a few hours and that we specifically told them to: people want to do a good job, and they spent way, way too much time. It had also never occurred to me that people might not have the time or equipment to do the exercise. They might not be able to afford it. My main takeaway is this: Respect other people. Make sure you pay for their time, or make it no longer than a normal interview would be. Offer to provide whatever equipment they might need to complete the exercise.

Or: be very aware that you are pushing some candidates out the door.

This first case made for some amazing discussions though, and I still use it for other purposes such as training.

We adapted it for whatever role we were recruiting for (such as business analyst/requirements analyst, test lead etc.), but the basic format looked the same. We had some unexpected reactions, such as candidates cancelling interviews because they thought the case did not match the role, but all in all it has been a really positive experience.

The latest iterations have tried to narrow and deepen the scope. When we were looking to recruit a tester for a team where good coding skills and deep technical knowledge were important, we asked a few specific questions meant to probe not only whether the candidate could find bugs, but also:

- Do they do shallow or deep testing?

- Can they tell a story?

- If they get the code, can they read it? Suggest a fix?

- Are they comfortable using tools?

- Can they suggest a good test for this bug fix?

The case can be found here: http://bit.ly/TechTester

In that interview we spoke a lot about what the candidates thought about things like automation, security testing, model-based testing, load and performance testing, and the concept of “Automate all the things!”

This is my favourite so far and a format that I think we will stick to in future processes. Of course, which questions we ask will vary with what is important to the team then and there (and what we see in the near future).

At CAST, the Q&A afterwards raised some interesting ideas, and among those I will try to implement at least one: giving the candidate the choice between a take-home and an onsite assignment. It shames me that I never thought of this myself, but that’s bias for you! I hate onsite assignments and did not see that my preferred way might actually be a nightmare for someone else.

If you got this far: Thank you for your time!

* The standard process I am referring to might not look the same in your country/culture, but in Sweden it looks something like this: a recruiter or manager scans resumes. Then follows a number of interviews (how many depends on the company), and then often some kind of test(s). The tests might look at things like personality, motivation, performance or even IQ. As a last step, the companies often contact a few references.






CAST 2018 #1: Speedtesting Workshop

Talks | Posted by pejgan 2018-08-09 21:29:29
So, the first in a series of posts about CAST is a post about the workshop I had the pleasure of doing with my friend and colleague Tomas Rosenqvist.

The title was "Speedtesting using mind maps", and it consisted of 90 intense minutes of interaction with the awesome "Bling R Us" webshop that we created some time ago as a recruitment exercise (I'll get back to that in the next post...)

The purpose of the workshop is to show how using a mind map template can give structure to your testing while still letting you move fast.

The exercise was divided into three parts: planning, execution and documentation. Each part consisted of a short introduction, doing the actual work, and then a debrief.

To summarize it, for a specific session:
- Set a mission
- Figure out a good approach
- Spell out all your (relevant) assumptions explicitly
- Make a really, really rough plan for your session
- Go for it!
- Take loads of notes
- Clean it up and reflect on your work
- Make sure you use as positive a language as possible to maximize the impact of your report

It was… absolutely AWESOME. I loved the energy and enthusiasm in the room, and my only regret was having to interrupt people due to time constraints.


Slides, the template (XMind), an example of a filled-out template (XMind) and a PDF with the instructions and specification are available at https://bit.ly/SpeedTesting

Thanks to everyone who participated, and if you try it out for yourself: please send us feedback and your results so we can iterate and improve! :-)

@LenaPejgan & @muamaidbengt (Twitter)



Slides from requirements conference

Talks | Posted by pejgan 2018-06-22 09:27:02
So, on May 16th I attended the one-day conference "Kravhantering på riktigt" ("Requirements management for real"), a new Swedish conference about requirements work, and I was fortunate enough to be allowed to present how mind mapping and visual models can help.

I've spoken at developer conferences and testing conferences, but this was my first time speaking to a room full of requirements professionals; a nice twist!

Among other things, I spoke about the fear of the things we know we don't know and how that fear can hinder us. Instead of fearing it we should embrace that awareness and be happy about all the things we actually know!

I also spoke about the big pile of unspoken or unknown requirements and how our assumptions create a lot of problems.


My takeaway from the different talks is that we all spoke about the same challenges, but from different perspectives. Some of our slides even showed the same principles used in different ways. Quite interesting!

My slides, in Swedish, can be found here:




UK Star 2018

Talks | Posted by pejgan 2018-03-13 19:31:15

I just came back from ukStar 2018 and thought I'd summarize my experience while it's still fresh in my memory.

So, first of all: I only had the chance to attend one of the two days. Day 2 actually had a lot of interesting talks that I wish I could have attended, but unfortunately I had to go back home.

Day 1 consisted of two keynotes, one workshop session and a bunch of track talks.

First off: the opening keynote by Christina Ohanian was awesome. She is a great speaker and had a very interesting story to tell about how to grow a community. A great way to start the day! I felt very energized and got a few ideas to try out with our own competence forums.

Next up were the workshops, and I had chosen to attend Empowering People Through Play by Christina Ohanian and Nicola Sedgwick. We got to try out a few games that can be used to get teams to talk, think about what they do, or just energize people. I met my old friend the Lego Duck once again, so of course I was happy :-D The biggest takeaway was definitely the exercise where we had to draw a postcard from instructions given by one person in the team, then describe the postcard on a sheet of paper, and then another team had to draw our postcard from our requirements... Let's just say we did a quite poor job, even though we tried mind maps. As a big fan of mind maps I felt a bit bad about the result, but the biggest thing with visual models is of course the iterative process, from first rough sketch to refined model, and we only had one shot. Oh well! It was great fun and I'm sure to try it out with one of my teams!

Next was a track about Quality of Things, which was interesting but a bit too far away from my own work for me to really get everything. It was quite interesting to see the problems that occur when you have to test things like smoke detectors!

Then came Blockchain Applications And How To Test Them by Rhian Lewis, an amazing woman I've talked to online in our Women in Testing group. She was so much fun to listen to! Great body language, great energy, and I had no idea blockchains could be used for so many things! I actually want to try it out, even if I have no idea what for...

The middle of the day is a big blur. I had to skip a few tracks, overwhelmed by the number of people and by nerves for my own talk, so after lunch I stayed and talked to Christina, Nicola and Dan Billing, which was so much fun. Lovely people! Speaking of people I talked to, I also have to mention Harry Girlea. He is 13(!) and already more experienced as a speaker than me. I so wish I had gotten the chance to see his talk; can you believe that by the time he is old enough to vote he will be considered a senior tester! A guy to keep an eye out for.

Ok. Next up was the fantastic Marianne Duijst with Self-Organizing the 24h GoForIT Innovation Challenge. A great, inspiring talk about innovation and not holding people back. You rock, Marianne!

Then the trio of 8-minute talks that I was part of was due, and I had the privilege of finishing off after Kinga Witko (Make IT Accessible) and Rick Tracy (Testing Tests and Where to Test Them). Eight minutes felt impossible when I started out. Remember, this talk is something I've given a lot in the last year, but usually in 30-60 minutes! I think, however, I managed to strip away all of the story and keep just the core: the what, how and why of visual modelling.

After a few drinks with my lovely friends in Women in Testing it was time to go back to the hotel and pack. Lovely seeing you all; can't wait for the next time.

My slides are available here:


I got a question about writing a blog post about visual modelling, and I will try!




TestSverige UnConference 2018

Conferences | Posted by pejgan 2018-02-18 20:37:49

So, this weekend I attended TestSverige UnConference 2018, along with a group of other members. I've attended a few unconferences before (Agila Sverige, for example), but this was the first time with a really limited number of attendees.

In the end we were 10 eager participants, which might sound small but allowed for really passionate discussions (that might have spilled into dinner, after dinner and breakfast...)

The first draft of the schedule gave us two tracks with two 30-minute sessions between each bigger break, but in the spirit of agile we decided to only plan until the first fika (Swedish coffee break), then plan until lunch, then until the next break, and so on. After each session we had a quick recap from each group, and then you chose your next session. Choosing which one to join was sometimes painful, as you might want to attend both...

Topics chosen were:

Model based testing

What does a test coach do?

The lone tester in an agile team

Test Culture vs. strategy

Drunk hamster deploy

Agent based testing

Catastrophic failures

Security testing

Motivation - yours and others

AI and testing

Maintainable tests

Deliberate practise

The future of TestSverige

Retro

I had brought my Moving Motivators cards, so I was glad to get a chance to discuss motivation. We each prioritized the cards and discussed each person's choices. An interesting point made was that we interpreted the terms really differently, which is exactly the point of the generalized words and descriptions.

I encourage everyone to think about their own motivations and to be open to the fact that others probably value other things and see your motivations in another light.

Göran Bakken brought a stack of interesting books; Explore It! is one of my personal favourites.

My main takeaways from the discussions are:

- Combining model-based, agent-based and AI/Machine Learning is a really interesting possible future that should be investigated.

- Given the right people, environments and prerequisites, you could have an application without technical debt and have enough faith to let a drunk hamster deploy your code.

- Communication is key. Communicate what you can do but also what you can't.

- Most testers could, with a bit of effort, learn to do the basic security testing that a lot of us think is really complicated and requires deep technical knowledge. There are a lot of cool tools out there to try, and even with just Bug Magnet or the Big List of Naughty Strings you could do a lot more than nothing (see the sketch after this list).

- Maintainability is crucial, but it requires a good strategy, testing at the right level, working together in the team on _all_ test levels, and re-using good, useful components/objects/modules/fixtures (call them whatever works in your context!)
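
To make the "a lot more than nothing" point concrete, a hedged sketch: feed the Big List of Naughty Strings to a form field and watch for server errors. The list and its URL are real; the form endpoint is made up:

```python
import requests

BLNS = ("https://raw.githubusercontent.com/minimaxir/"
        "big-list-of-naughty-strings/master/blns.txt")

# Skip blank lines and the list's own '#' comment lines.
strings = [line for line in requests.get(BLNS).text.splitlines()
           if line and not line.startswith("#")]

for s in strings:
    r = requests.post("https://example.com/search", data={"q": s})
    if r.status_code >= 500:
        print(f"server error for input: {s!r}")
```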

But, as always: the best part is meeting other testers. Even though the schedule ended around 18:00, the discussions and exchange of knowledge continued into the evening and even into breakfast.

I met people I knew from before, people I had heard of, and people completely new to me, and I hope I get to meet each and every one of them again.

Thank you all for being so generous with yourselves!


See you next time and don't be strangers, reach out whenever you want.




SAST Q1 2018

Talks | Posted by pejgan 2018-02-16 22:57:53

Very weird (but nice) seeing yourself in print!

Apart from technical issues with screen resolution vs. Prezi, the presentation in general felt great. Got lovely feedback and great questions from the audience.

Presentation is available here:




