Alright folks, so I decided to dive into grabbing some New Zealand rugby union fixture data. Why? Well, I’m a huge rugby fan, and I wanted to build a little something that would keep me updated without having to dig through various websites all the time.
First thing I did was scout around for data sources. I spent a good chunk of time just Googling different things like “new zealand rugby fixtures api,” “super rugby schedule,” etc. I ended up finding a few websites that seemed to have the info I needed, but nothing that was readily available as a clean API. Bummer.
So, plan B: web scraping. I know, I know, it’s not ideal, but sometimes you gotta do what you gotta do. I chose Python for this because it’s what I’m most comfortable with. I installed `BeautifulSoup4` and `requests` – my go-to scraping toolkit.
I picked a website that seemed to have a decent layout for the fixtures. Then, I started inspecting the HTML. Right-click, “Inspect Element” – you know the drill. I needed to figure out which HTML tags held the team names, dates, times, and locations. This part always takes the longest, because website structures can be messy!
Once I had a handle on the HTML structure, I wrote a Python script to:

- Fetch the webpage using `requests`.
- Parse the HTML using `BeautifulSoup`.
- Find all the relevant elements using CSS selectors (that’s where the “Inspect Element” work paid off).
- Extract the text from those elements and store it in a list of dictionaries. Each dictionary represented a single fixture (roughly like the sketch below).
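The exact code obviously depends on the site you pick, so treat this as a rough sketch rather than the real thing – the URL and CSS selectors below are made-up placeholders:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL and selectors: the real ones depend entirely on
# whichever fixtures site you end up scraping.
URL = "https://example.com/nz-rugby-fixtures"

response = requests.get(URL, headers={"User-Agent": "fixtures-hobby-script"}, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

fixtures = []
for row in soup.select("div.fixture"):  # hypothetical CSS selector
    fixtures.append({
        "date": row.select_one(".date").get_text(strip=True),
        "time": row.select_one(".time").get_text(strip=True),
        "home": row.select_one(".home-team").get_text(strip=True),
        "away": row.select_one(".away-team").get_text(strip=True),
        "venue": row.select_one(".venue").get_text(strip=True),
    })

print(f"Scraped {len(fixtures)} fixtures")
```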
The initial script was a mess, I’m not gonna lie. Got a lot of empty lists and “None” values. Had to tweak the CSS selectors and add some error handling to deal with variations in the website’s layout. Debugging is half the battle, right?
After getting the data, it was pretty raw. Dates were in weird formats, times were in NZ time (which is fine for me, but maybe not for others), and the team names weren’t always consistent. So, I wrote some more Python code to:
- Clean up the date and time formats using `datetime`.
- Standardize the team names (e.g., “Crusaders” instead of “BNZ Crusaders”) – see the snippet below.
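Roughly speaking, it was just `strptime` plus a little alias dictionary. The raw date format and the example row here are assumptions for illustration, not the site’s actual format:

```python
from datetime import datetime

# Assumed raw format, e.g. "Sat 15 Feb 2025" -- adjust to whatever the site actually uses.
RAW_DATE_FORMAT = "%a %d %b %Y"

# Map sponsored/long names onto the short names I wanted (extend as needed).
TEAM_ALIASES = {
    "BNZ Crusaders": "Crusaders",
}

def clean_fixture(fixture):
    """Normalise one scraped fixture dict in place and return it."""
    fixture["date"] = datetime.strptime(fixture["date"], RAW_DATE_FORMAT).date().isoformat()
    fixture["home"] = TEAM_ALIASES.get(fixture["home"], fixture["home"])
    fixture["away"] = TEAM_ALIASES.get(fixture["away"], fixture["away"])
    return fixture

# Made-up example row, just to show the shape:
print(clean_fixture({"date": "Sat 15 Feb 2025", "time": "19:05",
                     "home": "BNZ Crusaders", "away": "Chiefs", "venue": "Christchurch"}))
```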
Finally, I wanted to store this data somewhere. For this little project, I just used a CSV file. Easy and quick. I used the `csv` module in Python to write the cleaned-up fixture data to a CSV.
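That bit is only a few lines with `csv.DictWriter`. Here’s a minimal sketch, with a hypothetical filename:

```python
import csv

# Column order is just a choice; field names match the fixture dicts above.
FIELDS = ["date", "time", "home", "away", "venue"]

def write_fixtures(fixtures, path="nz_rugby_fixtures.csv"):
    """Write the cleaned-up fixture dicts to a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(fixtures)
```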
Now, I’ve got a CSV file with all the New Zealand rugby union fixtures. I can easily load this into a spreadsheet or use it in another script to send me notifications before games. Pretty cool!
Lessons learned? Web scraping can be a pain, but it’s a useful skill. Always respect the website’s robots.txt file and don’t hammer their servers with requests. Also, be prepared to adjust your script when the website inevitably changes its layout. Data cleaning is crucial! Garbage in, garbage out, as they say.
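If you want the “be polite” part in code, the standard library has you covered – a minimal sketch, with a placeholder domain:

```python
import time
from urllib import robotparser

# Check the site's robots.txt before fetching anything (placeholder domain).
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

page_url = "https://example.com/nz-rugby-fixtures"
if rp.can_fetch("fixtures-hobby-script", page_url):
    # fetch the page here, then pause so you're not hammering the server
    time.sleep(5)
```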

This was a fun little project, and I’m already thinking about how to expand it. Maybe add support for other rugby leagues, or build a simple web app to display the fixtures. The possibilities are endless!