Like countless other people in South Africa, I have invested in an inverter with a lithium battery. The battery is a significant part of this investment, so it is essential to get as much runtime out of a single charge as possible.
My inverter is connected to my DB board, meaning all the sockets and appliances in my house are powered by the inverter during a power outage, which, in South Africa, happens frequently and for hours at a time.
Having “always on” power in all the sockets is great and super convenient, but not always smart. As winter encroaches, we start plugging in appliances that draw substantial amounts of power, like electric heaters and blankets. Some electric heaters can draw up to 2 kilowatts. Given that the typical inverter system is paired with a 5-kilowatt-hour battery, a single electric heater can almost drain the entire capacity in just two hours.
To solve this problem, I built a “smart extension cord” that turns off when the power fails and turns back on once the power returns.
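At its core, such a cord only has to make one decision each second: is it safe to draw power right now? Here is a minimal sketch of that switching logic in Node.js. It assumes a once-per-second poll of grid availability (how you poll, and how you actually switch the plug, are left out), and the 30-second stability window is my own assumption, there to stop the plug flapping while the power flickers:

```javascript
// Sketch of the switching logic. Assumes a once-per-second poll of
// grid availability; polling (inverter API, voltage sensor) and the
// actual relay/plug control are out of scope here.
const STABLE_SECONDS = 30; // assumed debounce window

function makeController() {
  let stableFor = 0; // seconds the grid has been continuously up

  // Call once per second with the current grid state;
  // returns the state the plug should be in.
  return function tick(gridAvailable) {
    stableFor = gridAvailable ? stableFor + 1 : 0;
    return stableFor >= STABLE_SECONDS ? 'on' : 'off';
  };
}
```

The asymmetry is deliberate: the plug cuts out the instant the grid drops, but only switches back on after the grid has been stable for the full window.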
Are you looking for a way to improve your organisation’s network authentication security, reliability, and performance? Look no further than RADIUS proxy servers! By sitting between the client and the RADIUS server, these proxies provide an additional layer of protection, help prevent attacks, and improve redundancy, scalability, and performance. In this blog post, we’ll dive into the benefits of using RADIUS proxy servers and why they should be a crucial part of your IT infrastructure.
RADIUS servers are essential to many organisations’ IT infrastructure, providing a central point for managing network access and authentication. However, as with any critical component of an IT system, there are potential risks associated with relying solely on a single RADIUS server. This is where RADIUS proxy servers come in.
Load balancing is an important technique for scaling web applications and ensuring their reliability and availability. In a microservices architecture, a common pattern is to use an API gateway to route incoming requests to the appropriate service instances. However, as the traffic grows, a single API gateway instance may become a bottleneck and a single point of failure. To avoid these issues, we can use multiple API gateway instances and distribute the traffic among them using a load balancer.
In this post, we’ll explore how to set up a load-balanced API gateway using Node.js and Docker. We’ll use Docker Compose to define multiple instances of the API gateway and a load balancer, and we’ll configure the load balancer to distribute the traffic among the instances using a round-robin algorithm.
FreeRADIUS is a popular open-source RADIUS server that provides centralized AAA (authentication, authorization and accounting) services for network access. It can handle a large number of concurrent sessions and can be used for various purposes, including wireless network authentication and VPN access.
Load balancing is a crucial aspect of any network infrastructure, and it is even more critical when it comes to authentication and authorization systems like FreeRADIUS, where unavailability can result in the inability to connect to key networks or systems. Load balancing helps distribute the workload across multiple servers, improving performance and ensuring high availability.
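To make this concrete, FreeRADIUS can itself spread proxied requests across a pool of home servers. Below is a rough proxy.conf sketch for FreeRADIUS 3.x; the IP addresses, shared secrets, and realm are placeholders, not values from any real deployment:

```
# proxy.conf sketch (FreeRADIUS 3.x); addresses, secrets and the
# realm are placeholders.
home_server radius1 {
    type   = auth
    ipaddr = 10.0.0.1
    port   = 1812
    secret = changeme1
}

home_server radius2 {
    type   = auth
    ipaddr = 10.0.0.2
    port   = 1812
    secret = changeme2
}

home_server_pool radius_pool {
    # load-balance spreads requests across the listed servers;
    # fail-over would instead try them in order.
    type        = load-balance
    home_server = radius1
    home_server = radius2
}

realm example.com {
    auth_pool = radius_pool
}
```

With a pool type of load-balance, either backend can handle any authentication request, so losing one server degrades capacity rather than availability.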
Creating a high availability (HA) RADIUS cluster in the cloud is a complex but crucial step for ensuring that your network authentication and authorization services are always available to your customers. In this blog post, I will discuss the right way to create a HA RADIUS cluster in the cloud.
Are you tired of wasting time on mundane and repetitive tasks like responding to customer inquiries or creating marketing content? If so, then ChatGPT may be just the solution you’ve been looking for. In this blog post, we’ll explore what ChatGPT is, how it can be used by businesses to save time and resources, and the potential pitfalls of relying on this type of technology. Read on to find out more about the exciting possibilities of ChatGPT for your business, but be careful where you tread.
What is ChatGPT?
ChatGPT is a chatbot that uses a type of artificial intelligence called a language model to generate human-like responses to user input. It is based on the GPT (Generative Pre-trained Transformer) model, which was developed by OpenAI and has achieved state-of-the-art results in a number of natural language processing tasks.
Like so many other people in South Africa, I have been forced by the unreliability of our power grid into taking the plunge and investing in an inverter with a lithium backup battery.
I used a Raspberry Pi to retrieve Modbus data from the SunSynk inverter via a custom RS485 RJ45-to-USB cable, logged the data into a MySQL table via a Node.js app, and then used Grafana to display the data in a custom dashboard.
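To give a flavour of what the Node.js side has to do before anything is logged: Modbus holding registers arrive as unsigned 16-bit integers that need scaling, and power values can be negative (charging or exporting), so they need a two's-complement conversion first. A rough sketch; the register names and scale factors here are illustrative assumptions, not the actual SunSynk register map:

```javascript
// Sketch: decode raw Modbus holding-register values into readings.
// The register names and scale factors below are assumptions for
// illustration; the real SunSynk register map differs per model.
const REGISTER_MAP = {
  batteryVoltage: { scale: 0.01, unit: 'V' }, // e.g. raw 5321 -> 53.21 V
  batteryPower:   { scale: 1,    unit: 'W' },
  gridPower:      { scale: 1,    unit: 'W' },
};

// Modbus registers are unsigned 16-bit; signed quantities (negative =
// charging/exporting) come through as two's complement.
function toSigned16(raw) {
  return raw > 0x7fff ? raw - 0x10000 : raw;
}

// Turn one raw register value into a row ready for the MySQL table.
function decodeReading(name, raw) {
  const { scale, unit } = REGISTER_MAP[name];
  return {
    name,
    value: toSigned16(raw) * scale,
    unit,
    at: new Date().toISOString(),
  };
}
```

Once the readings are decoded into plain objects like this, inserting them into MySQL and charting them in Grafana is straightforward.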
I have been coding for almost three decades, in a multitude of languages. The last decade has been spent in PHP, jQuery & Bootstrap. Pretty traditional web development stuff. This has worked well, but there are now better and more modern tech stacks out there. It is time for a change, so I am going to document my journey into the brave new world that is Nuxt, Vue & Node.js.
This journey will be documented over a series of posts. To make them easy to follow, I will tag them all with #journeytonuxt.
But before we dive into the technicalities, let’s get to the elephant in the room. Why Nuxt?
Coding a new feature is like running a marathon by doing a lot of short sprints.
The mistake we as developers often make is trying to achieve too much in one go. We try to complete all the work in the spec at once, i.e. we disappear into a dark hole and only resurface once we are done. There are several reasons why this is a bad idea.
Only revealing what you have worked on when it is done means you cannot incorporate any feedback into your work while you code, which in turn means you will be re-doing a lot of it.
The longer you wait to release code the higher the chance that something external has changed that will result in you having to change some of what you have done.
Working in isolation means no collaboration, which in turn reduces innovation.
The best way to go is to release lots of small incremental changes (the sprints) instead of going all the way (the marathon) before releasing the code. This way you get feedback quickly, and should you be going off track you can get back on the right path quickly.
Divide your task into small chunks, and do these one at a time. Not only will you be able to better measure progress, but you will be able to collaborate much better, and the instant feedback will result in a much better result. This approach will also make it much easier for the person having to review, merge and release your code into the bigger product set.
How often have you thought that you could quickly add a new feature to your system? If you ask me, the answer is many times.
How often did it turn out to be quick? For me, none so far.
Adding a new feature quickly implies the following assumptions:
You know exactly what the actual requirement is
You know exactly what needs to be done
You know exactly how this will impact the rest of the system
Most of the time at least one of these assumptions is wrong, more often than not two or even three. This means a job that you thought would take a day turns into a week or even two.
Another seriously delaying factor is the dreaded scope creep. Often business wants more once they see what you have done. This can be managed, but what is far harder to control is the internal force that drives the top developers. We always want to over-deliver, which means we keep adding more and more as we code, causing our own scope creep.
My rules to manage this are simple:
Rule 1. Write down exactly what you want to accomplish
Rule 2. Write down exactly how you are going to accomplish it
Rule 3. Stick to the two rules above, no matter what
If you decide halfway along that it would be a good idea to add X, Y or Z features, don’t. Resist all temptation. Refer to the rules above if confused. Only once your new feature is done can you consider doing more, but now you can step back and consider the bigger picture before you dive back into the code.