Here are a couple of tips for manipulating the clock in Oracle’s VirtualBox.
I noticed this banner ad the other day from Mozy:
I’m working on a Python project that needs to run as root in order to work properly. Previously I’ve just run the whole PyCharm IDE as root, but this has some downsides, and I think I have a better approach now.
Learning about cryptography can be discouraging. You get so bombarded by “don’t invent your own”, “you’re doing it wrong”, and “even really smart people screw this up” that you wonder why you even bother to try. For me, the answer is because if you don’t learn it, someone who knows even less than you will end up implementing it (badly, but with confidence) on your project. So despite being fraught with peril (because I am not an Expert), I’ll share a little about the concepts of public key cryptography in a form that has been helpful to me.
Software is kind of cool in that you can write programs that verify that your other programs work correctly. These testing “meta programs” tend to get short shrift, though, because the code isn’t actually part of the shipping product. So who cares about cleanliness or good style or all that stuff?
I’d propose that a good rule (or at least guideline) for test code is:
Treat it like production code.
That means things like:
- Keep methods short
- Name things appropriately
- Avoid duplication (like copying and pasting big chunks of code)
- Add comments if the code isn’t self-explanatory for some reason
- Readability counts
The main justification is that the tests need to be correct and maintainable too, and all the things we’ve learned about good code contribute to those objectives. A large body of unmaintainable tests starts to become a liability rather than an asset. The one concession I’d make is that tests pretty much never require their own tests, because then you have a problem of infinite recursion.
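As a sketch of what this looks like in practice, here’s a small pytest-style test (the parse_price function and its behavior are hypothetical, just for illustration) that follows the rules above: short methods, descriptive names, and shared setup factored into a helper instead of copy-pasted:

```python
def parse_price(text):
    # Hypothetical function under test: parses "$1,299.99" into a float.
    return float(text.lstrip("$").replace(",", ""))

def make_price(dollars, cents):
    # Shared helper so each test doesn't copy-paste the string formatting.
    return "${:,}.{:02d}".format(dollars, cents)

def test_parse_price_handles_thousands_separator():
    assert parse_price(make_price(1299, 99)) == 1299.99

def test_parse_price_handles_small_amounts():
    assert parse_price(make_price(5, 0)) == 5.0
```

Each test reads as a one-line statement of intent, and if the price format ever changes, only make_price needs updating.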
APIs are user interfaces for programmers. I came across a function recently whose couple of user interface issues serve as a great example of this principle.
The function is internal to an HTTP client class, taking care of the common logic independent of the HTTP method, and the signature looked something like this:
def _send(method, content_type): ...
This looks pretty straightforward to programmers used to HTTP, but the parameters are deceiving.
First of all, content_type is not actually a full content-type, but rather only the content subtype. In other words, it takes whatever you give it and prepends 'application/', so rather than passing in a content-type like 'application/json', you’d just pass in 'json'. That’s not a huge deal, but a programmer’s user experience would be that much smoother if the parameter name were content_subtype.
The other parameter, method, you might expect to be a string indicating the HTTP method for the request. That would be wrong. It is actually a function object from the Python requests library, such as requests.post, that will be used to actually send the request off. Maybe method_function would be a better name.
Sure, sometimes you need a little context to see that your first guess about a parameter is wrong and that the design choice is actually reasonable. But small details, like slightly more precise parameter names, can prevent bugs.
As an example, the function above had a line like this in the middle of it, written by one of the developers who was reasonably familiar with the project:
if method == 'GET' or method == 'POST': ...
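Because method is a function object rather than a string, that comparison can never be true, and the branch is dead code. A quick demonstration (using a plain stand-in function instead of requests.post so the snippet is self-contained):

```python
def post(url, **kwargs):
    # Stand-in for requests.post; only its identity matters here.
    pass

method = post
print(method == 'GET' or method == 'POST')
# False: a function object never compares equal to a string
```

A name like method_function would likely have stopped this bug before it was written.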