Planet Python
Last update: July 22, 2025 10:42 AM UTC
July 22, 2025
Test and Code
235: pytest-django - Adam Johnson
In this episode, special guest Adam Johnson joins the show and examines pytest-django, a popular plugin among Django developers. He highlights its advantages over the built-in unittest framework, including improved test management and debugging. Adam addresses transition challenges, evolving fixture practices, and offers tips for optimizing test performance. This episode is a concise guide for developers looking to enhance their testing strategies with pytest-django.
Links:
- pytest-django - a plugin for pytest that provides a set of useful tools for testing Django applications and projects.
Help support the show AND learn pytest:
- The Complete pytest course is now a bundle, with each part available separately.
- pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
- Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI
- Then pytest Booster Rockets can help with advanced parametrization and building plugins.
- Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.
death and gravity
When to use classes in Python? When you repeat similar sets of functions
Are you having trouble figuring out when to use classes or how to organize them?
Have you repeatedly searched for "when to use classes in Python", read all the articles and watched all the talks, and still don't know whether you should be using classes in any given situation?
Have you read discussions about it that for all you know may be right, but they're so academic you can't parse the jargon?
Have you read articles that all treat the "obvious" cases, leaving you with no clear answer when you try to apply them to your own code?
My experience is that, unfortunately, the best way to learn this is to look at lots of examples.
Most guidelines tend to either be too vague if you don't already know enough about the subject, or too specific and saying things you already know.
This is one of those things that once you get it seems obvious and intuitive, but it's not, and is quite difficult to explain properly.
So, instead of prescribing a general approach, let's look at:
- one specific case where you may want to use classes
- examples from real-world code
- some considerations you should keep in mind
- The heuristic
- Example: Retrievers
- Example: Flask's tagged JSON
- Formalizing this
- Counter-example: modules
- Try it out
The heuristic #
If you repeat similar sets of functions, consider grouping them in a class.
That's it.
In its most basic form, a class is when you group data with functions that operate on that data; sometimes, there is no data, but it can still be useful to group the functions into an abstract object that exists only to make things easier to use / understand.
Depending on whether you choose which class to use at runtime, this is sometimes called the strategy pattern.
Note
As Wikipedia puts it, "A heuristic is a practical way to solve a problem. It is better than chance, but does not always work. A person develops a heuristic by using intelligence, experience, and common sense."
So, this is not the correct thing to do all the time, or even most of the time.
Instead, I hope that this and other heuristics can help build the right intuition for people on their way from "I know the class syntax, now what?" to "proper" object-oriented design.
Example: Retrievers #
My feed reader library retrieves and stores web feeds (Atom, RSS and so on).
Usually, feeds come from the internet, but you can also use local files. The parsers for various formats don't really care where a feed is coming from, so they always take an open file as input.
reader supports conditional requests – that is, only retrieve a feed if it changed. To do this, it stores the ETag HTTP header from a response, and passes it back as the If-None-Match header of the next request; if nothing changed, the server can respond with 304 Not Modified instead of sending back the full content.
Let's have a look at how the code to retrieve feeds evolved over time; this version omits a few details, but it will end up with a structure similar to that of the full version. In the beginning, there was a function – URL and old ETag in, file and new ETag out:
import requests

def retrieve(url, etag=None):
    if any(url.startswith(p) for p in ('http://', 'https://')):
        headers = {}
        if etag:
            headers['If-None-Match'] = etag
        response = requests.get(url, headers=headers, stream=True)
        response.raise_for_status()
        if response.status_code == 304:
            response.close()
            return None, etag
        etag = response.headers.get('ETag', etag)
        response.raw.decode_content = True
        return response.raw, etag
    # fall back to file
    path = extract_path(url)
    return open(path, 'rb'), None
We use Requests to get HTTP URLs, and return the underlying file-like object.1
For local files, we support both bare paths and file URIs; for the latter, we do a bit of validation – file:feed and file://localhost/feed are OK, but file://invalid/feed and unknown:feed2 are not:
import urllib.parse
import urllib.request

def extract_path(url):
    url_parsed = urllib.parse.urlparse(url)
    if url_parsed.scheme == 'file':
        if url_parsed.netloc not in ('', 'localhost'):
            raise ValueError("unknown authority for file URI")
        return urllib.request.url2pathname(url_parsed.path)
    if url_parsed.scheme:
        raise ValueError("unknown scheme for file URI")
    # no scheme, treat as a path
    return url
Problem: can't add new feed sources #
One of reader's goals is to be extensible. For example, it should be possible to add new feed sources like an FTP server (ftp://...) or Twitter without changing reader code; however, our current implementation makes it hard to do so.
We can fix this by extracting retrieval logic into separate functions, one per protocol:
def http_retriever(url, etag):
    headers = {}
    # ...
    return response.raw, etag

def file_retriever(url, etag):
    path = extract_path(url)
    return open(path, 'rb'), None
...and then routing to the right one depending on the URL prefix:
# sorted by key length (longest first)
RETRIEVERS = {
    'https://': http_retriever,
    'http://': http_retriever,
    # fall back to file
    '': file_retriever,
}

def get_retriever(url):
    for prefix, retriever in RETRIEVERS.items():
        if url.lower().startswith(prefix.lower()):
            return retriever
    raise ValueError("no retriever for URL")

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever(url, etag)
Now, plugins can register retrievers by adding them to RETRIEVERS (in practice, there's a method for that, so users don't need to care about it staying sorted).
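The article doesn't show that method, but a minimal sketch of what such a registration helper could look like (my reconstruction, not reader's actual code) is:

def mount_retriever(prefix, retriever):
    RETRIEVERS[prefix] = retriever
    # re-sort so longer (more specific) prefixes are tried first
    items = sorted(RETRIEVERS.items(), key=lambda kv: len(kv[0]), reverse=True)
    RETRIEVERS.clear()
    RETRIEVERS.update(items)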
Problem: can't validate URLs until retrieving them #
To add a feed, you call add_feed() with the feed URL.
But what if you pass an invalid URL? The feed gets stored in the database, and you get an "unknown scheme for file URI" error on the next update. However, this can be confusing – a good API should signal errors near the action that triggered them. This means add_feed() needs to validate the URL without actually retrieving it.
For HTTP, Requests can do the validation for us; for files, we can call extract_path() and ignore the result. Of course, we should select the appropriate logic in the same way we select retrievers, otherwise we're back where we started.
Now, there's more than one way of doing this. We could keep a separate validator registry, but that may accidentally become out of sync with the retriever one.
URL_VALIDATORS = {
    'https://': http_url_validator,
    'http://': http_url_validator,
    '': file_url_validator,
}
Or, we could keep a (retriever, validator) pair in the retriever registry. This is better, but it's not all that readable (what if we need to add a third thing?); also, it makes customizing behavior that affects both the retriever and validator harder.
RETRIEVERS = {
    'https://': (http_retriever, http_url_validator),
    'http://': (http_retriever, http_url_validator),
    '': (file_retriever, file_url_validator),
}
Better yet, we can use a class to make the grouping explicit:
class HTTPRetriever:

    def retrieve(self, url, etag):
        headers = {}
        # ...
        return response.raw, etag

    def validate_url(self, url):
        session = requests.Session()
        session.get_adapter(url)
        session.prepare_request(requests.Request('GET', url))

class FileRetriever:

    def retrieve(self, url, etag):
        path = extract_path(url)
        return open(path, 'rb'), None

    def validate_url(self, url):
        extract_path(url)
We then instantiate them, and update retrieve() to call the methods:
http_retriever = HTTPRetriever()
file_retriever = FileRetriever()

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever.retrieve(url, etag)
validate_url() works just the same:
def validate_url(url):
    retriever = get_retriever(url)
    retriever.validate_url(url)
And there you have it – if you repeat similar sets of functions, consider grouping them in a class.
Not just functions, attributes too #
Say you want to update feeds in parallel, using multiple threads.
Retrieving feeds is mostly waiting around for I/O, so it will benefit the most from it. Parsing, on the other hand, is pure Python, CPU bound code, so threads won't help due to the global interpreter lock.
However, because we're streaming the response body, I/O is not done when the retriever returns the file, but when the parser finishes reading it.3 We can move all the (network) I/O into retrieve() by reading the response into a temporary file and returning it instead.
We'll allow any retriever to opt into this behavior by using a class attribute:
class HTTPRetriever:
    slow_to_read = True

class FileRetriever:
    slow_to_read = False
If a retriever is slow to read, retrieve() does the swap:
import shutil
import tempfile

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    file, etag = retriever.retrieve(url, etag)
    if file and retriever.slow_to_read:
        temp = tempfile.TemporaryFile()
        shutil.copyfileobj(file, temp)
        file.close()
        temp.seek(0)
        file = temp
    return file, etag
Example: Flask's tagged JSON #
The Flask web framework provides an extendable compact representation for non-standard JSON types called tagged JSON (code). The serializer class delegates most conversion work to methods of various JSONTag subclasses (one per supported type):
- check() checks if a Python value should be tagged by that tag
- tag() converts it to tagged JSON
- to_python() converts a JSON value back to Python (the serializer uses the key tag attribute to find the correct tag)
Interestingly, tag instances have an attribute pointing back to the serializer, likely to allow recursion – when (un)packing a possibly nested collection, you need to recursively (un)pack its values. Passing the serializer to each method would have also worked, but when your functions take the same arguments...
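To make that shape concrete, here's a rough sketch of what one such tag class might look like; the class name, key value, and method bodies are illustrative, not Flask's actual code:

class TagSet:
    key = ' se'  # hypothetical marker identifying this tag in the JSON output

    def __init__(self, serializer):
        # back-reference to the serializer, so nested values
        # can be (un)packed recursively
        self.serializer = serializer

    def check(self, value):
        # should this tag handle the given Python value?
        return isinstance(value, set)

    def tag(self, value):
        # convert to tagged JSON, recursively tagging the items
        return {self.key: [self.serializer.tag(item) for item in value]}

    def to_python(self, value):
        # convert the tagged JSON payload back to Python
        return set(value)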
Formalizing this #
OK, the retriever code works. But how should you communicate to others (readers, implementers, interpreters, type checkers) that an HTTPRetriever is the same kind of thing as a FileRetriever, and as anything else that can go in RETRIEVERS?
Duck typing #
Here's the definition of duck typing:
A programming style which does not look at an object's type to determine if it has the right interface; instead, the method or attribute is simply called or used ("If it looks like a duck and quacks like a duck, it must be a duck.") [...]
This is what we're doing now! If it retrieves like a retriever and validates URLs like a retriever, then it's a retriever.
You see this all the time in Python. For example, json.dump() takes a file-like object; now, the full text file interface has lots of methods and attributes, but dump() only cares about write(), and will accept any object implementing it:
>>> import json
>>> class MyFile:
...     def write(self, s):
...         print(f"writing: {s}")
...
>>> f = MyFile()
>>> json.dump({'one': 1}, f)
writing: {
writing: "one"
writing: :
writing: 1
writing: }
The main way to communicate this is through documentation:
Serialize obj [...] to fp (a .write()-supporting file-like object)
Inheritance #
Nevertheless, you may want to be more explicit about the relationships between types. The easiest option is to use a base class, and require retrievers to inherit from it.
class Retriever:
    slow_to_read = False

    def retrieve(self, url, etag):
        raise NotImplementedError

    def validate_url(self, url):
        raise NotImplementedError
This allows you to check the type with isinstance(), provide default methods and attributes, and will help type checkers and autocompletion, at the expense of forcing a dependency on the base class.
>>> class MyRetriever(Retriever): pass
>>> retriever = MyRetriever()
>>> retriever.slow_to_read
False
>>> isinstance(retriever, Retriever)
True
What it won't do is check that subclasses actually define the methods:
>>> retriever.validate_url('myurl')
Traceback (most recent call last):
...
NotImplementedError
Abstract base classes #
This is where abstract base classes come in. The decorators in the abc module allow defining abstract methods that must be overridden:
from abc import ABC, abstractmethod, abstractproperty

class Retriever(ABC):

    @abstractproperty
    def slow_to_read(self):
        return False

    @abstractmethod
    def retrieve(self, url, etag):
        raise NotImplementedError

    @abstractmethod
    def validate_url(self, url):
        raise NotImplementedError
This is checked at runtime (but only that methods and attributes are present, not their signatures or types):
>>> class MyRetriever(Retriever): pass
>>> MyRetriever()
Traceback (most recent call last):
...
TypeError: Can't instantiate abstract class MyRetriever with abstract methods retrieve, slow_to_read, validate_url
>>> class MyRetriever(Retriever):
...     slow_to_read = False
...     def retrieve(self, url, etag): ...
...     def validate_url(self, url): ...
...
>>> MyRetriever()
<__main__.MyRetriever object at 0x1037aac50>
Tip
You can also use ABCs to register arbitrary types as "virtual subclasses"; this allows them to pass isinstance() checks without inheritance, but won't check for required methods:
>>> class MyRetriever: pass
>>> Retriever.register(MyRetriever)
<class '__main__.MyRetriever'>
>>> isinstance(MyRetriever(), Retriever)
True
Protocols #
Finally, we have protocols, aka structural subtyping, aka static duck typing. Introduced in PEP 544, they go in the opposite direction – what if instead of declaring what the type of something is, we declare what methods it has to have to be of a specific type?
You define a protocol by inheriting from typing.Protocol:
from typing import IO, Protocol

class Retriever(Protocol):

    @property
    def slow_to_read(self) -> bool:
        ...

    def retrieve(self, url: str, etag: str | None) -> tuple[IO[bytes] | None, str | None]:
        ...

    def validate_url(self, url: str) -> None:
        ...
...and then use it in type annotations:
def mount_retriever(prefix: str, retriever: Retriever) -> None:
    raise NotImplementedError
Some other code (not necessarily yours, not necessarily aware the protocol even exists) defines an implementation:
class MyRetriever:
    slow_to_read = False

    def validate_url(self):
        pass
...and then uses it with annotated code:
mount_retriever('my', MyRetriever())
A type checker like mypy will check if the provided instance conforms to the protocol – not only that methods exist, but that their signatures are correct too – all without the implementation having to declare anything.
$ mypy myproto.py
myproto.py:11: error: Argument 2 to "mount_retriever" has incompatible type "MyRetriever"; expected "Retriever" [arg-type]
myproto.py:11: note: "MyRetriever" is missing following "Retriever" protocol member:
myproto.py:11: note: retrieve
myproto.py:11: note: Following member(s) of "MyRetriever" have conflicts:
myproto.py:11: note: Expected:
myproto.py:11: note: def validate_url(self, url: str) -> None
myproto.py:11: note: Got:
myproto.py:11: note: def validate_url(self) -> Any
Found 1 error in 1 file (checked 1 source file)
Tip
If you decorate your protocol with runtime_checkable, you can use it in isinstance() checks, but like ABCs, it only checks methods are present.
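For example (a minimal sketch; the slow_to_read attribute is left out, since runtime checks only verify that members are present):

>>> from typing import Protocol, runtime_checkable
>>> @runtime_checkable
... class Retriever(Protocol):
...     def retrieve(self, url, etag): ...
...     def validate_url(self, url): ...
...
>>> class MyRetriever:
...     def retrieve(self, url, etag): ...
...     def validate_url(self, url): ...
...
>>> isinstance(MyRetriever(), Retriever)
True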
Counter-example: modules #
If a class has no state and you don't need inheritance, you can use a module instead:
# module.py
slow_to_read = False
def retrieve(url, etag):
raise NotImplementedError
def validate_url(url):
raise NotImplementedError
From a duck typing perspective, this is a valid retriever, since it has all the expected methods and attributes. So much so, that it's also compatible with protocols:
import module
mount_retriever('mod', module)
$ mypy module.py
Success: no issues found in 1 source file
I tried to keep the retriever example stateless, but real world classes rarely are (it may be immutable state, but it's state nonetheless). Also, you're limited to exactly one implementation per module, which is usually too much like Java for my taste.
Tip
For a somewhat forced, but illustrative example of a stateful concurrent.futures executor implemented like this, and a comparison with class-based alternatives, check out Inheritance over composition, sometimes.
Try it out #
If you're doing something and you think you need a class, do it and see how it looks. If you think it's better, keep it, otherwise, revert the change. You can always switch in either direction later.
If you got it right the first time, great! If not, by having to fix it you'll learn something, and next time you'll know better.
Also, don't beat yourself up.
Sure, there are nice libraries out there that use classes in just the right way, after spending lots of time to find the right abstraction. But abstraction is difficult and time consuming, and in everyday code good enough is just that – good enough – you don't need to go to the extreme.
Learned something new today? Share this with others, it really helps!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
This code has a potential bug: if we were using a persistent session instead of a transient one, the connection would never be released, since we're not closing the response after we're done with it. In the actual code, we're doing both, but the only way to do so reliably is to return a context manager; I omitted this because it doesn't add anything to our discussion about classes. [return]
We're handling unknown URI schemes here because bare paths don't have a scheme, so anything that didn't match a known scheme must be a bare path. Also, on Windows (not supported yet), the drive letter in a path like c:\feed.xml is indistinguishable from a scheme. [return]
Unless the response is small enough to fit in the TCP receive buffer. [return]
July 21, 2025
Real Python
What Does isinstance() Do in Python?
Python’s isinstance() function helps you determine if an object is an instance of a specified class or its superclass, aiding in writing cleaner and more robust code. You use it to confirm that function parameters are of the expected types, allowing you to handle type-related issues preemptively. This tutorial explores how isinstance() works, its use with subclasses, and how it differs from type().
By the end of this tutorial, you’ll understand that:
- isinstance() checks if an object is a member of a class or superclass.
- type() checks an object’s specific class, while isinstance() considers inheritance.
- isinstance() correctly identifies instances of subclasses.
- There’s an important difference between isinstance() and type(), as the quick sketch below shows.
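A minimal REPL sketch of that last point, using hypothetical Animal and Dog classes:

>>> class Animal: pass
>>> class Dog(Animal): pass
>>> buddy = Dog()
>>> isinstance(buddy, Animal)  # considers inheritance
True
>>> type(buddy) is Animal  # checks the specific class only
False
>>> type(buddy) is Dog
True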
Exploring isinstance() will deepen your understanding of the objects you work with and help you write more robust, error-free code.
To get the most out of this tutorial, it’s recommended that you have a basic understanding of object-oriented programming. More specifically, you should understand the concepts of classes, objects—also known as instances—and inheritance.
For this tutorial, you’ll mostly use the Python REPL and some Python files. You won’t need to install any libraries since everything you’ll need is part of core Python. All the code examples are provided in the downloadable materials, and you can access these by clicking the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about isinstance() in Python.
Take the Quiz: Test your knowledge with our interactive “What Does isinstance() Do in Python?” quiz. You’ll receive a score upon completion to help you track your learning progress:
It’s time to start this learning journey, where you’ll discover the nature of the objects you use in your code.
Why Would You Use the Python isinstance() Function?
The isinstance() function determines whether an object is an instance of a class. It also detects whether the object is an instance of a superclass. To use isinstance(), you pass it two arguments:
- The instance you want to analyze
- The class you want to compare the instance against
These arguments must only be passed by position, not by keyword.
If the object you pass as the first argument is an instance of the class you pass as the second argument, then isinstance() returns True. Otherwise, it returns False.
Note: You’ll commonly see the terms object and instance used interchangeably. This is perfectly correct, but remembering that an object is an instance of a class can help you see the relationship between the two more clearly.
When you first start learning Python, you’re told that objects are everywhere. Does this mean that every integer, string, list, or function you come across is an object? Yes, it does! In the code below, you’ll analyze some basic data types:
>>> shape = "sphere"
>>> number = 8
>>> isinstance(shape, str)
True
>>> isinstance(number, int)
True
>>> isinstance(number, float)
False
You create two variables, shape and number, which hold str and int objects, respectively. You then pass shape and str to the first call of isinstance() to prove this. The isinstance() function returns True, showing that "sphere" is indeed a string.
Next, you pass number and int to the second call to isinstance(), which also returns True. This tells you 8 is an integer. The third call returns False because 8 isn’t a floating-point number.
Knowing the type of data you’re passing to a function is essential to prevent problems caused by invalid types. While it’s better to avoid passing incorrect data in the first place, using isinstance() gives you a way to avert any undesirable consequences.
Take a look at the code below:
>>> def calculate_area(length, breadth):
...     return length * breadth
...
>>> calculate_area(5, 3)
15
>>> calculate_area(5, "3")
'33333'
Your function takes two numeric values, multiplies them, and returns the answer. Your function works, but only if you pass it two numbers. If you pass it a number and a string, your code won’t crash, but it won’t do what you expect either.
The string gets replicated when you pass a string and an integer to the multiplication operator (*). In this case, the "3" gets replicated five times to form "33333", which probably isn’t the result you expected.
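One way to guard against this, sketched here as an illustration rather than taken from the tutorial, is to validate the arguments with isinstance() before multiplying:

>>> def calculate_area(length, breadth):
...     if not isinstance(length, (int, float)) or not isinstance(breadth, (int, float)):
...         raise TypeError("length and breadth must be numbers")
...     return length * breadth
...
>>> calculate_area(5, "3")
Traceback (most recent call last):
  ...
TypeError: length and breadth must be numbers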
Things get worse when you pass in two strings:
Read the full article at https://realpython.com/what-does-isinstance-do-in-python/ »
Python Bytes
#441 It's Michaels All the Way Down
Topics covered in this episode:

- Distributed sqlite follow up: Turso (https://turso.tech) and Litestream (https://litestream.io)
- PEP 792 – Project status markers in the simple index (https://peps.python.org/pep-0792/)
- Run coverage on tests (https://hugovk.dev/blog/2025/run-coverage-on-tests/)
- docker2exe: Convert a Docker image to an executable (https://github.com/rzane/docker2exe)
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=U8K-NBsGCGc

About the show

Sponsored by Digital Ocean: pythonbytes.fm/digitalocean-gen-ai — use code DO4BYTES and get $200 in free credit.

Connect with the hosts:

- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show), we'll never share it.

Michael #1: Distributed sqlite follow up: Turso and Litestream

- Michael Booth:
  - Turso marries the familiarity and simplicity of SQLite with modern, scalable, and distributed features.
  - Seems to me that Turso is to SQLite what MotherDuck is to DuckDB.
- Mike Fiedler:
  - Continue to use the SQLite you love and care about (even the one inside the Python runtime) and launch a daemon that watches the db for changes and replicates changes to an S3-type object store.
  - Deeper dive: Litestream: Revamped (https://fly.io/blog/litestream-revamped/)

Brian #2: PEP 792 – Project status markers in the simple index

- Currently 3 status markers for packages:
  - Trove classifier status
  - Indices can be yanked
  - PyPI projects: admins can quarantine a project, owners can archive a project
- Proposal is to have something that can have only one state:
  - active
  - archived
  - quarantined
  - deprecated
- This has been approved, but not implemented yet.

Brian #3: Run coverage on tests (https://hugovk.dev/blog/2025/run-coverage-on-tests/)

- Hugo van Kemenade
- And apparently, run Ruff with at least F811 turned on
- Helps with copy/paste/modify mistakes, but also subtler bugs like consumed generators being reused.

Michael #4: docker2exe: Convert a Docker image to an executable (https://github.com/rzane/docker2exe)

- This tool can be used to convert a Docker image to an executable that you can send to your friends.
- Build with a simple command: docker2exe --name alpine --image alpine:3.9
- Requires docker on the client device
- Probably doesn't map volumes/ports/etc, though could potentially be exposed in the dockerfile.

Extras

Brian:

- Back catalog of Test & Code is now on YouTube under @TestAndCodePodcast
  - So far 106 of 234 episodes are up. The rest are going up according to daily limits.
  - Ordering is rather chaotic, according to upload time, not release ordering.
- There will be a new episode this week: pytest-django with Adam Johnson

Joke: If programmers were doctors (https://x.com/PR0GRAMMERHUM0R/status/1939806175475765389)
Daniel Roy Greenfeld
uv run for running tests on versions of Python
The uv library is not just useful for dependency management: it also comes with a run subcommand that doesn't just run Python scripts, but also lets you pick a specific Python version and set dependencies for that run. Between runs it caches everything, so it runs fast.
For example, if I have a FastAPI project I could run tests on it using this command:
uv run --with pytest --with httpx pytest
But what if I want to test a particular version of Python? Then I simply specify the version of Python to run the test:
uv run --python=3.13 --with pytest --with httpx pytest
Here's where it gets fun. I can use a Makefile (or a justfile) to test on multiple Python versions.
testall: ## Run all the tests for all the supported Python versions
uv run --python=3.10 --with pytest --with httpx pytest
uv run --python=3.11 --with pytest --with httpx pytest
uv run --python=3.12 --with pytest --with httpx pytest
uv run --python=3.13 --with pytest --with httpx pytest
If you want to use pyproject.toml dependency groups, switch from the --with flag to the --extra flag. For example, if your testing dependencies are in a test group:
[project.optional-dependencies]
test = [
# For the test client
"httpx>=0.28.1",
# Test runner
"pytest>=8.4.0",
]
You could then run tests across multiple versions of Python thus:
testall: ## Run all the tests for all the supported Python versions
uv run --python=3.10 --extra test pytest
uv run --python=3.11 --extra test pytest
uv run --python=3.12 --extra test pytest
uv run --python=3.13 --extra test pytest
And there you have it, a simple replacement for Nox or Tox. Of course those tools have lots more features that some users may care about. However, for my needs this works great and eliminates a dependency+configuration from a number of my projects.
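For comparison, the Nox equivalent is itself a small Python file. A rough sketch, assuming the same test extra as above:

# noxfile.py -- rough equivalent of the Makefile target above
import nox

@nox.session(python=["3.10", "3.11", "3.12", "3.13"])
def tests(session):
    # install the project plus its "test" optional dependencies
    session.install(".[test]")
    session.run("pytest")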
Thanks to Audrey Roy Greenfeld for pairing with me on getting this to work.
July 20, 2025
Go Deh
All Truth in Truthtables!
(Best viewed on a larger than phone screen)
To crib from my RosettaCode tasks description and examples:
A truth table is a display of the inputs to, and the output of a Boolean equation organised as a table where each row gives one combination of input values and the corresponding value of the equation.
And as examples:
Boolean expression: A ^ B

A B : A ^ B
0 0 : 0
0 1 : 1
1 0 : 1
1 1 : 0

Boolean expression: S | ( T ^ U )

S T U : S | ( T ^ U )
0 0 0 : 0
0 0 1 : 1
0 1 0 : 1
0 1 1 : 0
1 0 0 : 1
1 0 1 : 1
1 1 0 : 1
1 1 1 : 1
Format
A truth table has a header row of columns showing first the names of inputs assigned to each column; a visual separator - e.g. ':'; then the column name for the output result.
The body of the table, under the inputs section, contains rows of all binary combinations of the inputs. It is usually arranged as each row of the input section being a binary count from zero to 2**input_count - 1
The body of the table, under the result section, contains rows showing the binary output produced from the input configuration in the same row, to the left.
Format used
I am interested in the number of inputs rather than their names so will show vector i with the most significant indices to the left (so the binary count in the input section's body looks right).
Similarly I am interested in the bits in the result column rather than a name so will just call the result column r.
From one result to many
Here's the invocation, and truth tables produced for some simple boolean operators:
OR
i[1] i[0] : r
=================
0 0 : 0
0 1 : 1
1 0 : 1
1 1 : 1
XOR
i[1] i[0] : r
=================
0 0 : 0
0 1 : 1
1 0 : 1
1 1 : 0
AND
i[1] i[0] : r
=================
0 0 : 0
0 1 : 0
1 0 : 0
1 1 : 1
For those three inputs, we can extend the table to show result columns for OR, XOR and then AND, like this:
OR, XOR, then AND result *columns*

i[1] i[0] : r[0] r[1] r[2]
===========================
0 0 : 0 0 0
0 1 : 1 1 0
1 0 : 1 1 0
1 1 : 1 0 1
All Truth
Just how many results are possible?
Well, i = 2 inputs gives 2**i = 4 possible input boolean combinations; so a result column has 2**i = 4 bits.
The number of different result columns is therefore 2**(2**i) = 2**4 = 16
We can show all possible results by successive results being a binary count, but this time by column in the results section, (with the LSB being closest to the header row)
The pp_table function automatically generates all possible results if a second parameter of None is used
All Truths of two inputs!

i[1] i[0] : r[0] r[1] r[2] r[3] r[4] r[5] r[6] r[7] r[8] r[9] r[10] r[11] r[12] r[13] r[14] r[15]
==============================================================================================================
0 0 : 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
0 1 : 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1 0 : 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1 1 : 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
We might say that it shows all possible truths for up-to-and-including two inputs. That is because the results include outputs not dependent on any or all of those two inputs. For example, r[0] and r[15] do not depend on any input, as they give constant outputs of 0 and 1, respectively. r[5] is simply ~i[0] and does not depend on i[1].
It's Big, Oh!
The results grow as 2**(2**i), sometimes called double exponential growth! It gets large, quickly!
i | 2**(2**i)
--|----------
0 | 2
1 | 4
2 | 16
3 | 256
4 | 65_536
5 | 4_294_967_296
The code
I contemplated using AI, but getting it to understand what I wanted, plus the endless prompting needed to make it do what I want, the way I wanted it, put me off; I decided on writing it all by myself. I knew I wanted it just so, and the coding would be nothing new to me, just nailing what I wanted to show.
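That said, a minimal reconstruction of a pp_table-like function (my own sketch, not the post's actual implementation) might look like this:

from itertools import product

def pp_table(name, expr, n):
    # print a truth table for expr, a function of n boolean inputs,
    # with the most significant input index on the left
    header = ' '.join(f'i[{j}]' for j in reversed(range(n))) + ' : r'
    print(name)
    print(header)
    print('=' * len(header))
    for bits in product((0, 1), repeat=n):
        # bits arrive most-significant-first, matching the header
        r = expr(*bits)
        print('    '.join(str(b) for b in bits), ':', int(r))

pp_table('XOR', lambda i1, i0: i1 ^ i0, 2)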
END.
Armin Ronacher
Welcoming The Next Generation of Programmers
This post is addressed to the Python community, one I am glad to be a member of.
I’m a product of my community. A decade ago I wrote about how much I owed the Python community. Recently I found myself reminiscing again. This year at EuroPython I even gave a brief lightning talk recalling my time in the community — it made me tear up a little.
There were two reasons for this trip down memory lane. First, I had the opportunity to be part of the new Python documentary, which brought back a flood of memories (good and bad). Second, I’ve found myself accidentally pulled towards agentic coding and vibe coders1. Over the last month and a half I have spoken with so many people on AI and programming and realized that a growing number of them are people I might not, in the past, have described as “programmers.” Even on the way to the conference I had the pleasure to engage in a multi-hour discussion on the train with an air traffic controller who ventured into programming because of ChatGPT to make his life easier.
I’m not sure where I first heard it, but I like the idea that you are what you do. If you’re painting (even your very first painting) you are a painter. Consequently if you create a program, by hand or with the aid of an agent, you are a programmer. Many people become programmers essentially overnight by picking up one of these tools.
Heading to EuroPython this year I worried that the community that shaped me might not be receptive to AI and agentic programming. Some of that fear felt warranted: over the last year I saw a number of dismissive posts in my circles about using AI for programming. Yet I have also come to realize that acceptance of AI has shifted significantly. More importantly there is pretty wide support of the notion that newcomers will and should be writing AI-generated code.
That matters, because my view is that AI will not lead to fewer programmers. In fact, the opposite seems likely. AI will bring more people into programming than anything else we have done in the last decade.
For the Python community in particular, this is a moment to reflect. Python has demonstrated its inclusivity repeatedly — think of how many people have become successful software engineers through outreach programs (like PyLadies) and community support. I myself can credit much of my early career to learning from others on the Python IRC channels.
We need to pay close attention to vibe coding. And that's not because it might produce lower‑quality code, but because if we don't intentionally welcome the next generation learning through these tools, they will miss out on important lessons many of us learned the hard way. It would be a mistake to treat them as outcasts or “not real” programmers. Remember that many of our first programs did not have functions and were a mess of GOTO and things copy/pasted together.
Every day someone becomes a programmer because they figured out how to make ChatGPT build something. Lucky for us: in many of those cases the AI picks Python. We should treat this as an opportunity and anticipate an expansion in the kinds of people who might want to attend a Python conference. Yet many of these new programmers are not even aware that programming communities and conferences exist. It’s in the Python community’s interest to find ways to pull them in.
Consider this: I can name the person who brought me into Python. But if you were brought in via ChatGPT or a programming agent, there may be no human there — just the AI. That lack of human connection is, I think, the biggest downside. So we will need to compensate: to reach out, to mentor, to create on‑ramps. To instil the idea that you should be looking for a community, because the AI won’t do that. We need to turn a solitary interaction with an AI into a shared journey with a community, and to move them towards learning the important lessons about engineering. We do not want to have a generation of developers held captive by companies building vibe-coding tools with little incentive for their users to break from those shackles.
-
I’m using vibe coders here to mean people who give in to having the machine program for them. I believe that many programmers will start in this way before they transition to more traditional software engineering.↩
July 18, 2025
Mike Driscoll
Announcing Squall: A TUI SQLite Editor
Squall is a SQLite viewer and editor that runs in your terminal. Squall is written in Python and uses the Textual package. Squall allows you to view and edit SQLite databases using SQL. You can check out the code on GitHub.
Here is what Squall looks like using the Chinook database:
Currently, there is only one command-line option: -f or --filename, which allows you to pass a database path to Squall to load.
Example Usage:
squall -f path/to/database.sqlite
The instructions assume you have uv or pip installed. You can install Squall from PyPI or directly from GitHub with one of the following commands:

uv tool install squall_sql
uv tool install git+https://github.com/driscollis/squall

If you want to upgrade to the latest version of Squall SQL, then you will want to run one of the following commands:

uv tool install git+https://github.com/driscollis/squall -U --force
pip install squall-sql
If you have cloned the package and want to run Squall, one way to do so is to navigate to the cloned repository on your hard drive using your Terminal. Then run the following command while inside the src folder:
python -m squall.squall
The post Announcing Squall: A TUI SQLite Editor appeared first on Mouse Vs Python.
The Python Coding Stack
Do You Really Know How `or` And `and` Work in Python?
Let's start with an easy question. Play along, please. I know you know how to use the or keyword, just bear with me for a bit…
Have you answered? If you haven't, please do, even if this is a simple question for you.
…Have you submitted your answer now?
I often ask this question when running live courses, and people are a bit hesitant to answer because it seems to be such a simple, even trivial, question. Most people eventually answer: True.
OK, let's dive further into how or works, and we'll also explore and in this article.
or
You may not have felt the need to cheat when answering the question above. But you could have just opened your Python REPL and typed in the expression. Let's try it:

Wait. What?!
The output is not True. Why 5? Let's try it again with different operands:
Hmm?!
Truthy and Falsy
Let's review the concept of truthiness in Python. Every Python object is either truthy or falsy. When you pass a truthy object to the built-in bool(), you get True. And, you guessed it, you'll get False when you pass a falsy object to bool().
In situations where Python is expecting a True or False, such as after the if or while keywords, Python will use the object's truthiness value if the object isn't a Boolean (True or False).
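A quick REPL check of some common objects (my examples, not the article's images):

>>> falsy = [0, 0.0, "", [], {}, set(), None, False]
>>> [bool(obj) for obj in falsy]
[False, False, False, False, False, False, False, False]
>>> truthy = [42, "hello", [0], -1]
>>> [bool(obj) for obj in truthy]
[True, True, True, True]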
Back to or
Let's get back to the expression 5 or 0. The integer 5 is truthy. You can confirm this by running bool(5), which returns True. But 0 is falsy. In fact, 0 is the only falsy integer. Every other integer is truthy. Therefore, 5 or 0 should behave like True. If you write if 5 or 0:, you'll expect Python to execute the block of code after the if statement. And it does.
But you've seen that 5 or 0 evaluates to 5. And 5 is not True. But it's truthy. So, the statement if 5 or 0: becomes if 5:, and since 5 is truthy, this behaves as if it were if True:.
But why does 5 or 0 give you 5?
or Only Needs One Truthy Value
The or keyword is looking at its two operands, the one before and the one after the or keyword. It only needs one of them to be true (by which I mean truthy) for the whole expression to be true (truthy).
So, what happens when you run the expression 5 or 0? Python looks at the first operand, which is 5. It's truthy, so the or expression simply gives back this value. It doesn't need to bother with the second operand because if the first operand is truthy, the value of the second operand is irrelevant. Recall that or only needs one operand to be truthy. It doesn't matter if only one or both operands are truthy.
So, what happens if the first operand is falsy?
The first of these expressions has one truthy and one falsy operand. But the first operand, 0, is falsy. Therefore, the or expression must look at the second operand. It's truthy. The or expression gives back the second operand. Therefore, the output of the or expression is truthy. Great.
But the or expression doesn't return the second operand because the second operand is truthy. Instead, it returns the second operand because the first operand is falsy.
When the first operand in an or expression is falsy, the result of the or expression is determined solely by the second operand. If the second operand is truthy, then the or expression is truthy. But if the second operand is falsy, the whole or expression is falsy. Recall that the previous two sentences apply to the case when the first operand is falsy.
That's why the second example above, 0 or "", returns the empty string, which is the second operand. An empty string is falsy—try bool("") to confirm this. Any non-empty string is truthy.
So:
- or always evaluates to the first operand when the first operand is truthy
- or always evaluates to the second operand when the first operand is falsy
But there's more to this…
Lazy Evaluation • Short Circuiting
Let's get back to the expression 5 or 0. The or looks at the first operand. It decides it's truthy, so its output is this first operand. It never even looks at the second operand.
Do you want proof? Consider the following or expression:
What's bizarre about this code at first sight? The expression int("hello") is not valid since you can't convert the string "hello" to an integer. Let's confirm this:
But the or expression above, 5 or int("hello"), didn't raise this error. Why?
Because Python never evaluated the second operand. Since the first operand, 5, is truthy, Python decides to be lazy—it doesn't need to bother with the second operand. This is called short-circuit evaluation.
That's why 5 or int("hello") doesn't raise the ValueError you might expect from the second operand.
However, if the first operand is falsy, then Python needs to evaluate the second operand:
In this case, you get the ValueError raised by the second operand.
Lazy is good (some will be pleased to read this). Python is being efficient when it evaluates expressions lazily. It saves time by avoiding the evaluation of expressions it doesn't need!
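This laziness also enables a common guard idiom; here's a small made-up example:

x = 0

# The second operand would raise ZeroDivisionError if evaluated,
# but `or` short-circuits on the truthy first operand
if x == 0 or 1 / x > 2:
    print("zero or large reciprocal")
# zero or large reciprocal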
and
How about the and keyword? The reasoning you need to use to understand and is similar to the one you used above when reading about or. But the logic is reversed. Let's try this out:
The and keyword requires both operands to be truthy for the whole expression to be true (truthy). In the first example above, 5 and 0, the first operand is truthy. Therefore, and needs to also check the second operand. In fact, if the first operand in an and expression is truthy, the second operand will determine the value of the whole expression.
When the first operand is truthy, and always returns the second operand. In the first example, 5 and 0, the second operand is 0, which is falsy. So, the whole and expression is falsy.
But in the second example, 5 and "hello", the second operand is "hello", which is truthy since it's a non-empty string. Therefore, the whole expression is truthy.
What do you think happens to the second operand when the first operand in an and expression is falsy?
The first operand is falsy. It doesn't matter what the second operand is, since and needs both operands to be truthy to evaluate to a truthy value.
And when the first operand in an and expression is falsy, Python's lazy evaluation kicks in again. The second operand is never evaluated. You have a short-circuit evaluation:
Once again, you use the invalid expression int("hello") as the second operand. This expression would raise an error when Python evaluates it. But, as you can see, the expression 0 and int("hello") never raises this error since it never evaluates the second operand.
Let's summarise how and works:
- and always evaluates to the first operand when the first operand is falsy
- and always evaluates to the second operand when the first operand is truthy
Compare this to the bullet point summary for the or expression earlier in this article.
Do you want to try video courses designed and delivered in the same style as these posts? You can get a free trial at The Python Coding Place and you also get access to a members-only forum.
More on Short-Circuiting
Here's code you may see that uses the or expression’s short-circuiting behaviour:
Now, you're assigning the value of the or expression to a variable name, person. So, what will person hold?
Let's try this out in two scenarios:
In the first example, you type your name when prompted. Or you can type my name, whatever you want! Therefore, the call to input() returns a non-empty string, which is truthy. The or expression evaluates to this first operand, which is the return value of the input() call. So, person is the string returned by input().
However, in the second example, you simply hit enter when prompted to type in a name. You leave the name field blank. In this case, input() returns the empty string, "". And an empty string is falsy. Therefore, or evaluates to the second operand, which is the string "Unknown". This string is assigned to person.
Final Words
So, or and and don't always evaluate to a Boolean. They'll evaluate to one of their two operands, which can be any object—any data type. Since all objects in Python are either truthy or falsy, it doesn't matter that or and and don't return Booleans!
Now you know!
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Image by Paolo Trabattoni from Pixabay
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Further reading related to this article’s topic:
Appendix: Code Blocks
Code Block #1
5 or 0
# 5
Code Block #2
"hello" or []
# 'hello'
Code Block #3
0 or 5
# 5
0 or ""
# ''
Code Block #4
5 or int("hello")
# 5
Code Block #5
int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #6
0 or int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #7
5 and 0
# 0
5 and "hello"
# 'hello'
Code Block #8
0 and 5
# 0
Code Block #9
0 and int("hello")
# 0
Code Block #10
person = input("Enter name: ") or "Unknown"
Code Block #11
person = input("Enter name: ") or "Unknown"
# Enter name: >? Stephen
person
# 'Stephen'
person = input("Enter name: ") or "Unknown"
# Enter name: >?
person
# 'Unknown'
Talk Python to Me
#514: Python Language Summit 2025
Every year the core developers of Python convene in person to focus on high priority topics for CPython and beyond. This year they met at PyCon US 2025. Those meetings are closed door to keep focused and productive. But we're lucky that Seth Michael Larson was in attendance and wrote up each topic presented and the reactions and feedback to each. We'll be exploring this year's Language Summit with Seth. It's quite insightful to where Python is going and the pressing matters.

Episode sponsors:

- Seer: AI Debugging, Code TALKPYTHON: https://talkpython.fm/seer
- Sentry AI Monitoring, Code TALKPYTHON: https://talkpython.fm/sentryagents
- Talk Python Courses: https://talkpython.fm/training

Links from the show:

- Seth on Mastodon: @sethmlarson@fosstodon.org
- Seth on Twitter: @sethmlarson
- Seth on GitHub: github.com/sethmlarson
- Python Language Summit 2025: https://pyfound.blogspot.com/2025/06/python-language-summit-2025.html
- WheelNext: https://wheelnext.dev/
- Free-Threaded Wheels: https://hugovk.github.io/free-threaded-wheels/
- Free-Threaded Python Compatibility Tracking: https://py-free-threading.github.io/tracking/
- PEP 779: Criteria for supported status for free-threaded Python: https://discuss.python.org/t/pep-779-criteria-for-supported-status-for-free-threaded-python/84319/123
- PyPI Data: https://py-code.org/
- Senior Engineer tries Vibe Coding: https://www.youtube.com/watch?v=_2C2CNmK7dQ
- Watch this episode on YouTube: https://www.youtube.com/watch?v=t7Ov3ICo8Kc
- Episode #514 deep-dive: talkpython.fm/514
- Episode transcripts: https://talkpython.fm/episodes/transcript/514/python-language-summit-2025
- Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong

Stay in touch with us:

- Subscribe to Talk Python on YouTube: talkpython.fm/youtube
- Talk Python on Bluesky: @talkpython.fm at bsky.app
- Talk Python on Mastodon: talkpython
- Michael on Bluesky: @mkennedy.codes at bsky.app
- Michael on Mastodon: mkennedy
Matt Layman
Enhancing Chatbot State Management with LangGraph
Picture this: it’s late and I’m deep in a coding session, wrestling with a chatbot that’s starting to feel more like a living thing than a few lines of Python. Today’s mission? Supercharge the chatbot’s ability to remember and verify user details like names and birthdays using LangGraph. Let’s unpack the journey, from shell commands to Git commits, and see how this bot got a memory upgrade. For clarity, this is my adventure running through the LangGraph docs.
July 17, 2025
Wingware
Wing Python IDE Version 11.0.2 - July 17, 2025
Wing Python IDE version 11.0.2 is now available. It improves source code analysis, avoids duplicate evaluations of values in the Watch tool, fixes ruff as an external code checker in the Code Warnings tool, and makes a few other minor improvements.

Downloads
Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.
New in Wing 11
Improved AI Assisted Development
Wing 11 improves the user interface for AI assisted development by introducing two separate tools: AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or iterate in creating a design or new code without directly modifying the code in an editor.
Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.
This release also improves setting up AI request context, so that both automatically and manually selected and described context items may be paired with an AI request. AI request contexts can now be stored, optionally so they are shared by all projects, and may be used independently with different AI features.
AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat. Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.
Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.
You can now also configure multiple AI providers for easier access to different models.
For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.
Package Management with uv
Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.
For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.
Improved Python Code Analysis
Wing 11 improves code analysis of literals such as dicts and sets, parametrized type aliases, typing.Self, type of variables on the def or class line that declares them, generic classes with [...], __all__ in *.pyi files, subscripts in typing.Type and similar, type aliases, and type hints in strings.
Updated Localizations
Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.
Improved diff/merge
Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.
For details see Difference and Merge under Wing Manual in Wing 11's Help menu.
Other Minor Features and Improvements
Wing 11 also improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.
Changes and Incompatibilities
Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.
If you have questions, please don't hesitate to contact us at support@wingware.com.
July 16, 2025
Real Python
Python Scope and the LEGB Rule: Resolving Names in Your Code
The scope of a variable in Python determines where in your code that variable is visible and accessible. Python has four general scope levels: local, enclosing, global, and built-in. When searching for a name, Python goes through these scopes in order. It follows the LEGB rule, which stands for Local, Enclosing, Global, and Built-in.
Understanding how Python manages the scope of variables and names is a fundamental skill for you as a Python developer. It helps you avoid unexpected behavior and errors related to name collisions or referencing the wrong variable.
By the end of this tutorial, you’ll understand that:
- A scope in Python defines where a variable is accessible, following the local, enclosing, global, and built-in (LEGB) rule.
- A namespace is a dictionary that maps names to objects and determines their scope.
- The four scope levels—local, enclosing, global, and built-in—each control variable visibility in a specific context.
- Common scope-related built-in functions include globals() and locals(), which provide access to the global and local namespaces.
To get the most out of this tutorial, you should be familiar with Python concepts like variables, functions, inner functions, exception handling, comprehensions, and classes.
Get Your Code: Click here to download the free sample code that you’ll use to learn about Python scope and the LEGB rule.
Understanding the Concept of Scope
In programming, the scope of a name defines the region of a program where you can unambiguously access that name, which could identify a variable, constant, function, class, or any other object. In most cases, you’ll only be able to access a name within its own scope or from an inner or nested scope.
Nearly all programming languages use the concept of scope to avoid name collisions and unpredictable behavior. Most often, you’ll distinguish between two main types of scope:
- Global scope: Names in this scope are available to all your code.
- Local scope: Names in this scope are only available or visible to the code within the scope.
Scope came about because early programming languages like BASIC only had global names. With this type of name, any part of the program could modify any variable at any time, making large programs difficult to maintain and debug. To work with global names, you’d need to keep all the code in mind to know what value a given name refers to at any time. This is a major side effect of not having scopes and relying solely on global names.
Modern languages, like Python, use the concept of variable scoping to avoid this kind of issue. When you use a language that implements scopes, you won’t be able to access all the names in a program from all locations. Instead, your ability to access a name depends on its scope.
Note: In this tutorial, you’ll be using the term name to refer to the identifiers of variables, constants, functions, classes, or any other object that can be assigned a name.
The names in your programs take on the scope of the code block in which you define them. When you can access a name from somewhere in your code, then the name is in scope. If you can’t access the name, then the name is out of scope.
Names and Scopes in Python
Because Python is a dynamically typed language, its variables come into existence when you first assign them a value. Similarly, functions and classes are available after you define them using def or class, respectively. Finally, modules exist after you import them into your current scope.
You can create names in Python using any of the following operations:
Operation | Example
---|---
Assignment | variable = value
Import | import module or from module import name
Function definition | def func(): pass
Function argument | func(value1, value2, ..., valueN)
Class definition | class DemoClass: pass
These are all ways to assign a value to either a variable, constant, function, class, instance, or module. In each case, you end up with a name that has a specific scope. This scope will depend on where in your code you’ve defined the name at hand.
Note: There’s an important difference between assignment operations and reference or access operations. When you assign a name, you’re either creating that name or making it reference a different object. When you reference a name, you’re retrieving the value that the name points to.
Python uses the location of a name definition to associate it with a particular scope. In other words, the place in which you define a name in your code determines the scope or visibility of that name.
For example, if you define a name inside a function, then that name will have a local scope. You can only access the name locally within the function implementation. In contrast, if you define a name at the top level of a module, then that name will have a global scope. You’ll be able to access it from anywhere in your code.
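As a quick illustration, here is a minimal sketch (my example, not from the tutorial) of how Python resolves the same name at different scope levels:

greeting = "global"              # global scope

def outer():
    greeting = "enclosing"       # enclosing scope
    def inner():
        greeting = "local"       # local scope
        print(greeting)          # Python finds the local name first
    inner()

outer()          # prints "local"
print(greeting)  # prints "global"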
Scope vs Namespace in Python
The concept of scope is closely related to the concept of namespace. A scope determines the visibility and lifetime of names, while a namespace provides the place where those names are stored.
Python implements namespaces as dictionaries that map names to objects. These dictionaries are the underlying mechanism that Python uses to store names under a specific scope. You can often access them through the .__dict__ attribute of the owning object.
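For example, you can inspect a module's namespace dictionary directly:

>>> import math
>>> math.__dict__["pi"]    # the namespace maps the name "pi" to a float object
3.141592653589793
>>> "pi" in vars(math)     # vars() returns the same dictionary
True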
Read the full article at https://realpython.com/python-scope-legb-rule/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Mike Driscoll
An Intro to Asciimatics – Another Python TUI Package
Text-based user interfaces (TUIs) have gained significant popularity in recent years. Even Rust has its own library called Ratatui after all. Python has several different TUI packages to choose from. One of those packages is called Asciimatics.
While Asciimatics is not as full-featured and slick as Textual is, you can do quite a bit with Asciimatics. In fact, there is a special kind of charm to the old-school flavor of the TUIs that you can create using Asciimatics.
In this tutorial, you will learn the basics of Asciimatics:
- Installation
- Creating a Hello World application
- Creating a form
The purpose of this tutorial is not to be exhaustive, but to give you a sense of how easy it is to create a user interface with Asciimatics. Be sure to read the complete documentation and check out their examples to learn more.
For now, let’s get started!
Installation
Asciimatics is a third-party Python package. What that means is that Asciimatics is not included with Python. You will need to install it. You should use a Python virtual environment for installing packages or creating new applications.
Whether you use the virtual environment or not, you can use pip to install Asciimatics:
python -m pip install asciimatics
Once Asciimatics is installed, you can proceed to creating a Hello World application.
Creating a Hello World Application
Creating a simple application is a concrete way to learn how to use an unfamiliar package. You will create a fun little application that “prints” out “Hello from Asciimatics” multiple times and in multiple colors.
Open up your favorite Python IDE or text editor, create a new file called hello_asciimatics.py, and then add the following code to it:
from random import randint

from asciimatics.screen import Screen

def hello(screen: Screen):
    while True:
        screen.print_at(
            "Hello from ASCIIMatics",
            randint(0, screen.width),
            randint(0, screen.height),
            colour=randint(0, screen.colours - 1),
            bg=randint(0, screen.colours - 1),
        )
        key = screen.get_key()
        if key in (ord("Q"), ord("q")):
            return
        screen.refresh()

Screen.wrapper(hello)
This code takes in an Asciimatics Screen object. You draw your text on the screen; in this case, you use the screen's print_at() method to draw the text. You use Python's handy random module to choose random coordinates in your terminal to draw the text, as well as random foreground and background colors.
You run this inside an infinite loop, so the text gets drawn all over the screen, on top of earlier copies of itself. If the user presses the "Q" key, the application breaks out of the loop and exits.
When you run this code, you should see something like this:
Isn’t that neat? Give it a try on your machine and verify that it works.
Now you are ready to create a form!
Creating a Form
When you want to ask the user for some information, you will usually use a form. You will find that this is true in web, mobile and desktop applications.
To make this work in Asciimatics, you will need a way to organize your widgets. To do that, you create a Layout object. You will find that Asciimatics follows a hierarchy of Screen -> Scene -> Effects, and then layouts and widgets. All of this is kind of abstract, though, so to make it easier to understand, you will write some code. Open up your Python IDE and create another new file. Name this new file ascii_form.py and then add this code to it:
from asciimatics.exceptions import StopApplication
from asciimatics.scene import Scene
from asciimatics.screen import Screen
from asciimatics.widgets import Frame, Button, Layout, Text

class Form(Frame):
    def __init__(self, screen):
        super().__init__(screen, screen.height * 2 // 3, screen.width * 2 // 3,
                         hover_focus=True, can_scroll=False,
                         title="Contact Details", reduce_cpu=True)
        layout = Layout([100], fill_frame=True)
        self.add_layout(layout)
        layout.add_widget(Text("Name:", "name"))
        layout.add_widget(Text("Address:", "address"))
        layout.add_widget(Text("Phone number:", "phone"))
        layout.add_widget(Text("Email address:", "email"))
        button_layout = Layout([1, 1, 1, 1])
        self.add_layout(button_layout)
        button_layout.add_widget(Button("OK", self.on_ok), 0)
        button_layout.add_widget(Button("Cancel", self.on_cancel), 3)
        self.fix()

    def on_ok(self):
        print("User pressed OK")

    def on_cancel(self):
        # raise StopApplication to cleanly end the application
        raise StopApplication("User pressed cancel. Quitting!")

def main(screen: Screen):
    while True:
        scenes = [
            Scene([Form(screen)], -1, name="Main Form")
        ]
        screen.play(scenes, stop_on_resize=True, start_scene=scenes[0],
                    allow_int=True)

Screen.wrapper(main, catch_interrupt=True)
The Form is a subclass of Frame, which is an Effect in Asciimatics. In this case, you can think of the frame as a kind of window or dialog within your terminal.
The frame will contain your form. Within the frame, you create a Layout object and tell it to fill the frame. Next you add the widgets to the layout, which will stack them vertically, from top to bottom. Then you create a second layout to hold two buttons: "OK" and "Cancel". The second layout is defined as having four columns, each with a size of one. You then add the buttons and specify which column each button should be put in.
To show the frame to the user, you add the frame to a Scene and then you play() it.
When you run this code, you should see something like the following:
Pretty neat, eh?
Now, this example is great for demonstrating how to create a more complex user interface, but it doesn't show how to get the data from the user, as you haven't written any code to grab the contents of the Text widgets. However, it does show that when you create the buttons, you can bind them to specific methods that get called when the user clicks on those buttons.
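For reference, one way you might retrieve those values is with the frame's save() method, which copies widget contents into the frame's .data dictionary, keyed by the names you gave the widgets. A minimal sketch, not part of the original example:

def on_ok(self):
    self.save()                 # copy widget values into self.data
    print(self.data["name"])    # keyed by the name given to each Text widget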
Wrapping Up
Asciimatics makes creating simple and complex applications for your terminal easy. However, the applications have a distinctly retro look that is reminiscent of the 1980s or even earlier. The applications are appealing in their own way, though.
This tutorial only scratches the surface of Asciimatics. For full details, you should check out their documentation.
If you want to create a more modern-looking user interface, you might want to check out Textual instead.
Related Reading
Want to learn how to create TUIs the modern way? Check out my book: Creating TUI Applications with Textual and Python.
Available at the following:
The post An Intro to Asciimatics – Another Python TUI Package appeared first on Mouse Vs Python.
Python Software Foundation
Affirm Your PSF Membership Voting Status
Every PSF voting-eligible Member (Supporting, Contributing, and Fellow) needs to affirm their membership to vote in this year’s election.
If you wish to vote in this year’s PSF Board election, you must affirm your intention to vote no later than Tuesday, August 26th, 2:00 pm UTC. This year’s Board Election vote begins Tuesday, September 2nd, 2:00 pm UTC, and closes on Tuesday, September 16th, 2:00 pm UTC.
You should have received an email from "psf@psfmember.org <Python Software Foundation>" with the subject "[Action Required] Affirm your PSF Membership voting intention for 2025 PSF Board Election" that contains information on how to affirm your voting status. If you were expecting to receive the email but have not (make sure to check your spam!), please email psf-elections@pyfound.org, and we’ll assist you. Please note: If you opted out of emails related to your membership, you did not receive this email.
Need to check your membership status?
Log on to psfmember.org and visit your PSF Member User Information page to see your membership record and status. If you are a voting-eligible member (active Supporting, Contributing, and Fellow members of the PSF) and do not already have a login, please create an account on psfmember.org and then email psf-elections@pyfound.org so we can link your membership to your account. Please ensure you have an account linked to your membership so that we can have the most up-to-date contact information for you in the future.
How to affirm your intention to vote
You can affirm your voting intention by following the steps in our video tutorial:
- Log in to psfmember.org
- Check your eligibility to vote (You must be a Contributing, Supporting, or Fellow member)
- Choose “Voting Affirmation” at the top right
- Select your preferred intention for voting in 2025
- Click the “Submit” button
PSF Bylaws
Section 4.2 of the PSF Bylaws requires that “Members of any membership class with voting rights must affirm each year to the corporation in writing that such member intends to be a voting member for such year.”
Our motivation is to ensure that our elections can meet quorum as required by Section 3.9 of our bylaws. As our membership has grown, we have seen that an increasing number of Contributing and Fellow members with indefinite membership do not engage with our annual election, making quorum difficult to reach.
An election that does not reach quorum is invalid. This would cause the whole voting process to be re-held, resulting in fewer voters and an undue amount of effort on the part of PSF Staff.
Recent updates to membership and voting
If you were formerly a Managing member, your membership has been updated to Contributing as of June 25th, 2025, per last year’s Bylaw change that merged Managing and Contributing memberships.
Per another recent Bylaw change that allows for simplifying the voter affirmation process by treating past voting activity as intent to continue voting, if you voted last year, you will automatically be added to the 2025 voter roll. Please note: If you removed or changed your email on psfmember.org, you may not automatically be added to this year's voter roll.
What happens next?
You’ll get an email from OpaVote with a ballot on or right before September 2nd, and then you can vote!
Check out our PSF Membership page to learn more. If you have questions about membership, nominations, or this year’s Board election, please email psf-elections@pyfound.org or join the PSF Discord for the upcoming Board Office Hours on August 12th, 9 PM UTC. You are also welcome to join the discussion about the PSF Board election on our forum.
July 15, 2025
death and gravity
Inheritance over composition, sometimes
In ProcessThreadPoolExecutor: when I/O becomes CPU-bound, we built a hybrid concurrent.futures executor that runs tasks in multiple threads on all available CPUs, bypassing Python's global interpreter lock.
Here's some interesting reader feedback:
Currently, the code is complex due to subclassing and many layers of delegation. Could this solution be implemented using only functions, no classes? Intuitively I feel classes would be hell to debug.
Since a lot of advanced beginners struggle with structuring code, we'll implement the same executor using inheritance, composition, and functions only, compare the solutions, and reach some interesting conclusions. Consider this a worked example.
Note
Today we're focusing on code structure. While not required, reading the original article will give you a better idea of why the code does what it does.
Requirements #
Before we delve into the code, we should have some understanding of what we're building. The original article sets out the following functional requirements:
- Implement the Executor interface; we want a drop-in replacement for existing concurrent.futures executors, so that user code doesn't have to change.
- Spread the work to one worker process per CPU, and then further to multiple threads inside each worker, to work around CPU becoming a bottleneck for I/O.
Additionally, we have two implicit non-functional requirements:
- Use the existing executors where possible (less code means fewer bugs).
- Only depend on stable, documented features; we don't want our code to break when concurrent.futures internals change.
concurrent.futures #
Since we're building on top of concurrent.futures, we should also get familiar with it; the docs already provide a great introduction:
The concurrent.futures module provides a high-level interface for asynchronously executing callables. [...this] can be performed with threads, using ThreadPoolExecutor, or separate processes, using ProcessPoolExecutor. Both implement the same interface, which is defined by the abstract Executor class.
Let's look at the classes in more detail.
Executor is an abstract base class1 defined in concurrent.futures._base. It provides dummy submit() and shutdown() methods, a concrete map() method implemented in terms of submit(), and context manager methods that shutdown() the executor on exit. Notably, the documentation does not mention the concrete methods, instead saying that the class "should not be used directly, but through its concrete subclasses".
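To see why this design matters for reuse, here's a simplified sketch of the pattern (illustrative only, not the actual stdlib source; MiniExecutor is a made-up name):

class MiniExecutor:
    def submit(self, fn, *args):
        raise NotImplementedError  # concrete subclasses provide this

    def map(self, fn, *iterables):
        # written purely in terms of the public submit(), so any subclass
        # that implements submit() gets a working map() for free
        futures = [self.submit(fn, *args) for args in zip(*iterables)]
        return (f.result() for f in futures)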
The first subclass, ThreadPoolExecutor, is defined in concurrent.futures.thread; it implements submit() and shutdown(), inheriting map() unchanged.
The second one, ProcessPoolExecutor, is defined in concurrent.futures.process; as an optimization, it overrides map() to chop the input iterables and pass the chunks to the superclass method with super().
Three solutions #
Now we're ready for code.
Inheritance #
First, the original implementation,2 arguably a textbook example of inheritance.
We override __init__, submit(), and shutdown(), and do some extra stuff on top of the inherited behavior, which we access through super(). We inherit the context manager methods, map(), and any public methods ProcessPoolExecutor may get in the future, assuming they use only other public methods (more on this below).
class ProcessThreadPoolExecutor(concurrent.futures.ProcessPoolExecutor):

    def __init__(self, max_threads=None, initializer=None, initargs=()):
        self.__result_queue = multiprocessing.Queue()
        super().__init__(
            initializer=_init_process,
            initargs=(self.__result_queue, max_threads, initializer, initargs)
        )
        self.__tasks = {}
        self.__result_handler = threading.Thread(target=self.__handle_results)
        self.__result_handler.start()

    def submit(self, fn, *args, **kwargs):
        outer = concurrent.futures.Future()
        task_id = id(outer)
        self.__tasks[task_id] = outer
        outer.set_running_or_notify_cancel()
        inner = super().submit(_submit, task_id, fn, *args, **kwargs)
        return outer

    def __handle_results(self):
        for task_id, ok, result in iter(self.__result_queue.get, None):
            outer = self.__tasks.pop(task_id)
            if ok:
                outer.set_result(result)
            else:
                outer.set_exception(result)

    def shutdown(self, wait=True):
        super().shutdown(wait=wait)
        if self.__result_queue:
            self.__result_queue.put(None)
            if wait:
                self.__result_handler.join()
            self.__result_queue.close()
            self.__result_queue = None
Because we're subclassing a class with private, undocumented attributes, our private attributes have to start with double underscores to avoid clashes with superclass ones (such as _result_queue).
In addition to the main class, there are some global functions used in the worker processes which remain unchanged regardless of the solution:
# this code runs in each worker process

_executor = None
_result_queue = None

def _init_process(queue, max_threads, initializer, initargs):
    global _executor, _result_queue
    _executor = concurrent.futures.ThreadPoolExecutor(max_threads)
    _result_queue = queue
    if initializer:
        initializer(*initargs)

def _submit(task_id, fn, *args, **kwargs):
    task = _executor.submit(fn, *args, **kwargs)
    task.task_id = task_id
    task.add_done_callback(_put_result)

def _put_result(task):
    if exception := task.exception():
        _result_queue.put((task.task_id, False, exception))
    else:
        _result_queue.put((task.task_id, True, task.result()))
Composition #
OK, now let's use composition –
instead of being a ProcessPoolExecutor,
our ProcessThreadPoolExecutor has one.
At a first glance,
the result is the same as before,
with super()
changed to self._inner
:
class ProcessThreadPoolExecutor:

    def __init__(self, max_threads=None, initializer=None, initargs=()):
        self._result_queue = multiprocessing.Queue()
        self._inner = concurrent.futures.ProcessPoolExecutor(
            initializer=_init_process,
            initargs=(self._result_queue, max_threads, initializer, initargs)
        )
        self._tasks = {}
        self._result_handler = threading.Thread(target=self._handle_results)
        self._result_handler.start()

    def submit(self, fn, *args, **kwargs):
        outer = concurrent.futures.Future()
        task_id = id(outer)
        self._tasks[task_id] = outer
        outer.set_running_or_notify_cancel()
        inner = self._inner.submit(_submit, task_id, fn, *args, **kwargs)
        return outer

    def _handle_results(self):
        for task_id, ok, result in iter(self._result_queue.get, None):
            outer = self._tasks.pop(task_id)
            if ok:
                outer.set_result(result)
            else:
                outer.set_exception(result)

    def shutdown(self, wait=True):
        self._inner.shutdown(wait=wait)
        if self._result_queue:
            self._result_queue.put(None)
            if wait:
                self._result_handler.join()
            self._result_queue.close()
            self._result_queue = None
Except, we need to implement the context manager protocol ourselves:
def __enter__(self):
    # concurrent.futures._base.Executor.__enter__
    return self

def __exit__(self, exc_type, exc_val, exc_tb):
    # concurrent.futures._base.Executor.__exit__
    self.shutdown(wait=True)
    return False
...and we need to copy map() from Executor, since it should use our submit():
def _map(self, fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures._base.Executor.map
    if timeout is not None:
        end_time = timeout + time.monotonic()
    fs = [self.submit(fn, *args) for args in zip(*iterables)]

    def result_iterator():
        try:
            fs.reverse()
            while fs:
                if timeout is None:
                    yield _result_or_cancel(fs.pop())
                else:
                    yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
        finally:
            for future in fs:
                future.cancel()

    return result_iterator()
...and the chunksize optimization from its ProcessPoolExecutor version:
def map(self, fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures.process.ProcessPoolExecutor.map
    if chunksize < 1:
        raise ValueError("chunksize must be >= 1.")
    results = self._map(partial(_process_chunk, fn),
                        itertools.batched(zip(*iterables), chunksize),
                        timeout=timeout)
    return _chain_from_iterable_of_lists(results)

def _result_or_cancel(fut, timeout=None):
    # concurrent.futures._base._result_or_cancel
    try:
        try:
            return fut.result(timeout)
        finally:
            fut.cancel()
    finally:
        del fut

def _process_chunk(fn, chunk):
    # concurrent.futures.process._process_chunk
    return [fn(*args) for args in chunk]

def _chain_from_iterable_of_lists(iterable):
    # concurrent.futures.process._chain_from_iterable_of_lists
    for element in iterable:
        element.reverse()
        while element:
            yield element.pop()
And, when the Executor interface gets new methods, we'll need to at least forward them to the inner executor, although we may have to copy those too.
On the upside, no base class means we can name attributes however we want.
But this is Python, why do we need to copy stuff? In Python, methods are just functions, so we could almost get away with this:
class ProcessThreadPoolExecutor:
    ...  # __init__, submit(), and shutdown() just as before

    __enter__ = ProcessPoolExecutor.__enter__
    __exit__ = ProcessPoolExecutor.__exit__
    map = ProcessPoolExecutor.map
Alas, it won't work – ProcessPoolExecutor map() calls super().map(), and object, the superclass of our executor, has no such method, which is why we had to change it to self._map() in our copy in the first place.
Functions #
Can this be done using only functions, though?
Theoretically no, since we need to implement the executor interface. Practically yes, since this is Python, where an "interface" just means having specific attributes, usually functions with specific signatures. For example, a module like this:
def init(max_threads=None, initializer=None, initargs=()):
    global _result_queue, _inner, _tasks, _result_handler
    _result_queue = multiprocessing.Queue()
    _inner = concurrent.futures.ProcessPoolExecutor(
        initializer=_init_process,
        initargs=(_result_queue, max_threads, initializer, initargs)
    )
    _tasks = {}
    _result_handler = threading.Thread(target=_handle_results)
    _result_handler.start()

def submit(fn, *args, **kwargs):
    outer = concurrent.futures.Future()
    task_id = id(outer)
    _tasks[task_id] = outer
    outer.set_running_or_notify_cancel()
    inner = _inner.submit(_submit, task_id, fn, *args, **kwargs)
    return outer

def _handle_results():
    for task_id, ok, result in iter(_result_queue.get, None):
        outer = _tasks.pop(task_id)
        if ok:
            outer.set_result(result)
        else:
            outer.set_exception(result)

def shutdown(wait=True):
    global _result_queue
    _inner.shutdown(wait=wait)
    if _result_queue:
        _result_queue.put(None)
        if wait:
            _result_handler.join()
        _result_queue.close()
        _result_queue = None
We also copy map() over, with minor tweaks:
def _map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures._base.Executor.map
    if timeout is not None:
        end_time = timeout + time.monotonic()
    fs = [submit(fn, *args) for args in zip(*iterables)]

    def result_iterator():
        try:
            fs.reverse()
            while fs:
                if timeout is None:
                    yield _result_or_cancel(fs.pop())
                else:
                    yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
        finally:
            for future in fs:
                future.cancel()

    return result_iterator()

def map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures.process.ProcessPoolExecutor.map
    if chunksize < 1:
        raise ValueError("chunksize must be >= 1.")
    results = _map(partial(_process_chunk, fn),
                   itertools.batched(zip(*iterables), chunksize),
                   timeout=timeout)
    return _chain_from_iterable_of_lists(results)
Behold, we can use the module itself as an executor:
>>> ptpe.init()
>>> ptpe.submit(int, '1').result()
1
Of note, everything that was an instance variable before is now a global variable; as a consequence, only one executor can exist at any given time, since there's only the one module.3 But it gets worse – calling init() a second time will clobber the state of the first executor, leading to all sorts of bugs; if we were serious, we'd prevent it somehow.
Also, some interfaces are more complicated than having the right functions; defining __enter__ and __exit__ is not enough to use a module in a with statement, since the interpreter looks them up on the class of the object, not on the object itself.
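You can see this for yourself (a small demo; the exact error message varies by Python version):

>>> import types
>>> mod = types.ModuleType("fake")
>>> mod.__enter__ = lambda: mod        # attached to the object...
>>> mod.__exit__ = lambda *exc: False
>>> with mod:                          # ...but looked up on its type
...     pass
Traceback (most recent call last):
  ...
TypeError: 'module' object does not support the context manager protocol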
We can work around this with an alternate "constructor" that returns a context manager:
@contextmanager
def init_cm(*args, **kwargs):
    init(*args, **kwargs)
    try:
        yield sys.modules[__name__]
    finally:
        shutdown()
>>> with ptpe.init_cm() as executor:
...     assert executor is ptpe
...     ptpe.submit(int, '2').result()
...
2
Comparison #
So, how do the solutions stack up? Here's a summary:
 | pros | cons
---|---|---
inheritance | little new code: context manager methods, map(), and future public methods are inherited for free | relies on the superclass being designed for inheritance; private attributes need double underscores to avoid clashes
composition | no clashes with superclass attributes, so we can name things freely | must copy map() and its helpers and keep them up to date; must forward any future Executor methods
functions | ? | all the downsides of composition, plus global state: only one executor at a time, and no with statement support
I may be a bit biased, but inheritance looks like a clear winner.
Composition over inheritance #
Given that favoring composition over inheritance is usually a good practice, it's worth discussing why inheritance won this time. I see three reasons:
- Composition helps most when you have unrelated components that need to be flexible in response to an evolving business domain; that's not the case here, so we get all the drawbacks with none of the benefits.
- The existing code is designed for inheritance.
- We have a true is-a relationship – ProcessThreadPoolExecutor really is a ProcessPoolExecutor with extra behavior, and not just part of an arbitrary hierarchy.
For a different line of reasoning involving subtyping, check out Hillel Wayne's When to prefer inheritance to composition; he offers this rule of thumb:
So, here's when you want to use inheritance: when you need to instantiate both the parent and child classes and pass them to the same functions.
Forward compatibility #
The inheritance solution assumes map() and any future public ProcessPoolExecutor methods are implemented only in terms of other public methods. This assumption introduces a risk that updates may break our executor; this is lowered by two things:
- concurrent.futures is in the standard library, which rarely does major rewrites of existing code, and never within a minor (X.Y) version; concurrent.futures exists in its current form since Python 3.2, released in 2011.
- concurrent.futures is clearly designed for inheritance, even if mainly to enable internal reuse, and not explicitly documented.
As active mitigations, we can add a basic test suite (which we should do anyway), and document the supported Python versions explicitly (which we should do anyway if we were to release this on PyPI).
If concurrent.futures were not in the standard library, I'd probably go with the composition version instead, although as already mentioned, this wouldn't be free from upkeep either. Another option would be to upstream ProcessThreadPoolExecutor, so that it is maintained together with the code it depends on.
Global state #
The functions-only solution is probably the worst of the three, since it has all the downsides of composition, and significant limitations due to its use of global state.
We could avoid using globals by passing the state (process pool executor instance, result queue, etc.) as function arguments, but this breaks the executor interface, and makes for an awful user experience. We could group common arguments into a single object so there's only one argument to pass around; if you call that argument self, it becomes obvious that's just a class instance with extra steps.
Having to keep track of a bunch of related globals has enough downsides that even if you do want a module-level API, it's still worth using a class to group them, and exposing the methods of a global instance at module-level (like so); Brandon Rhodes discusses this at length in The Prebound Method Pattern.
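In its simplest form, the pattern looks something like this (a minimal sketch with made-up names):

class _Counter:
    def __init__(self):
        self._count = 0

    def bump(self):
        self._count += 1
        return self._count

_instance = _Counter()
bump = _instance.bump   # module-level function pre-bound to the hidden instance

Callers just import the module and call bump(), never touching the instance directly.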
Complexity #
While the code is somewhat complex, that's mostly intrinsic to the problem itself (what runs in the main vs. worker processes, passing results around, error handling, and so on), rather than due to our use of classes, which only affects how we refer to ProcessPoolExecutor methods and how we store state.
One could argue that copying a bunch of code doesn't increase complexity, but if you factor in keeping it up to date and tested, it's not exactly free either.
One could also argue that building our executor on top of ProcessPoolExecutor is increasing complexity, and in a way that's true – for example, we have two result queues and had to deal with dead workers too, which wouldn't be the case if we wrote it from scratch; but in turn, that would come with having to understand, maintain, and test 800+ lines of low-level process management code. Sometimes, complexity I have to care about is more important than total complexity.
Debugging #
I have to come clean at this point – I use print debugging a lot 🙀 (especially if there are no tests yet, and sometimes from tests too); when that doesn't cut it, IPython's embed() usually provides enough interactivity to figure out what's going on.4
With the minimal test at the end of the file driving the executor, I used temporary print() calls in _submit(), _put_result(), and __handle_results() to check data is making its way through properly; if I expected the code to change more often, I'd replace them with permanent logging calls.
In addition, there were two debugging scripts in the benchmark file that I didn't show, one to automate killing workers at the right time, and one to make sure shutdown() waits for any pending tasks.
So, does how we wrote the code change any of this? Not really, no; all the techniques above (and using a debugger too) apply equally well. If anything, using classes makes interactive debugging easier, since it's easier to discover state via autocomplete (with functions only, you have to know to look it up on the module).
Try it out #
As I've said before, try it out – it only took ~10 minutes to convert the initial solution to the other two. In part, the right code structure is a matter of feeling and taste, and both are educated by reading and writing lots of code. If you think there's a better way to do something, do it and see how it looks; it is a sort of deliberate practice.
Learned something new today? Share this with others, it really helps!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
Executor is an abstract base class only by convention: it is a base class (other classes are supposed to subclass it), and it is abstract (other classes are supposed to provide concrete implementations for some methods).
Python also allows formalizing abstract base classes using the abc module; see When to use classes in Python? When you repeat similar sets of functions for an example of this and other ways of achieving the same goal. [return]
For brevity, I'm using the version before dealing with dead workers; the final code is similar, but with a more involved __handle_results. [return]
This is almost true – we could "this is Python" our way deeper and reload the module while still keeping a reference to the old one, but that's just a round-about, unholy way of emulating class instances. [return]
Pro tip: you can use embed() as a breakpoint() hook: PYTHONBREAKPOINT=IPython.embed python myscript.py. [return]
PyCoder’s Weekly
Issue #690: JIT, __init__, dis, and That's Not It (July 15, 2025)
#690 – JULY 15, 2025
View in Browser »
Reflections on 2 Years of CPython’s JIT Compiler
Ken is one of the contributors to CPython’s JIT compiler. This retrospective talks about what is going well and what Ken thinks could be better with the JIT.
KEN JIN
What Is Python’s __init__.py For?
Learn to declare packages with Python’s __init__.py, set package variables, simplify imports, and understand what happens if this module is missing.
REAL PYTHON
[Live Event] Debugging AI Applications with Sentry
Join the Sentry team for the latest Sentry Build workshop on Debugging with Sentry AI using Seer, MCP, and Agent Monitoring. In this hands-on session, you’ll learn how to debug AI-integrated applications and agents with full-stack visibility. Join live on July 23rd →
SENTRY sponsor
Disassembling Python Code Using the dis Module
Look behind the scenes to see what happens when you run your Python (CPython) code by using the tools in the dis module.
THEPYTHONCODINGSTACK.COM
Articles & Tutorials
Run Coverage on Tests
Code coverage tools tell you just what parts of your programs got executed during test runs. They're an important part of your test suite; without them, you may miss errors in your tests themselves. This post has two quick examples of just why you should use a coverage tool.
HUGO VAN KEMENADE
Python Software Foundation Bylaws Change
To comply with a variety of data privacy laws in the EU, UK, and California, the PSF is updating section 3.8 of the bylaws which formerly allowed any voting member to request a list of all members’ names and email addresses.
PYTHON SOFTWARE FOUNDATION
Happy 20th Birthday Django!
July 13th was the 20th anniversary of the first public commit to the Django code repository. In celebration, Simon has reposted his talk from the 10th anniversary on the history of the project.
SIMON WILLISON
330× Faster: Four Different Ways to Speed Up Your Code
There are many approaches to speeding up Python code; applying multiple approaches can make your code even faster. This post talks about four different ways you can achieve speed-up.
ITAMAR TURNER-TRAURING
Thinking About Running for the PSF Board? Let’s Talk!
It is that time of year, the PSF board elections are starting. If you’re thinking about running or want to know more, consider attending the office hours session on August 12th.
PYTHON SOFTWARE FOUNDATION
How Global Variables Work in Python Bytecode
To better understand how Python handles globals, this article walks through dynamic name resolution, the global store, and how monkey patching works at the bytecode level.
FROMSCRATCHCODE.COM • Shared by Tyler Green
Building a JIT Compiler for CPython
Talk Python To Me interviews Brandt Bucher and they talk about the upcoming JIT compiler for Python and how it is different than JITs in other languages.
KENNEDY & BUCHER podcast
International Travel to DjangoCon US 2025
DjangoCon US is in Chicago on September 8-12. If you’re travelling there from outside the US, this article has details that may be helpful to you.
DJANGOCON US
Using DuckDB With Pandas, Parquet, and SQL
Learn about DuckDB’s in-process architecture and SQL capabilities which can enhance performance and simplify data handling.
KHUYEN TRAN • Shared by Ben Portz
Exploring Protocols in Python
Learn how Python’s protocols improve your use of type hints and static type checkers in this practical video course.
REAL PYTHON course
How to Use MongoDB in Python Flask
This article explores the benefits of MongoDB and how to use it in a Flask application.
FEDERICO TROTTA • Shared by AppSignal
Open Source Security Work Isn’t “Special”
Seth gave a keynote talk at the OpenSSF Community Day NA and spoke about how in many open source projects security is thought of in isolation and it can be overwhelming to maintainers. This post from Seth is a summary of the talk and proposes changes to how we approach the security problem in open source.
SETH LARSON
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
July 16, 2025
REALPYTHON.COM
PyData Bristol Meetup
July 17, 2025
MEETUP.COM
PyLadies Dublin
July 17, 2025
PYLADIES.COM
Chattanooga Python User Group
July 18 to July 19, 2025
MEETUP.COM
IndyPy X IndyAWS: Python-Powered Cloud
July 22 to July 23, 2025
MEETUP.COM
PyOhio 2025
July 26 to July 28, 2025
PYOHIO.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #690.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Mike Driscoll
Creating TUI Applications with Textual and Python is Released
Learn how to create text-based user interfaces (TUIs) using Python and the amazing Textual package.
Textual is a rapid application development framework for your terminal or web browser. You can build complex, sophisticated applications in your terminal. While terminal applications are text-based rather than pixel-based, they still provide fantastic user interfaces.
The Textual package allows you to create widgets in your terminal that mimic those used in a web or GUI application.
Creating TUI Applications with Textual and Python teaches you how to use Textual to make striking applications of your own. The book’s first half covers everything you need to know to develop a terminal application.
The book’s second half has many small applications you will learn how to create. Each chapter also includes challenges to complete to help cement what you learn or give you ideas for continued learning.
Here are some of the applications you will create:
- A basic calculator
- A CSV viewer
- A Text Editor
- An MP3 player
- An ID3 Editor
- A Weather application
- A TUI for pre-commit
- RSS Reader
Where to Buy
You can purchase Creating TUI Applications with Textual and Python on the following websites:
Screenshots from the book: Calculator, CSV Viewer, MP3 Player, and Weather App.
The post Creating TUI Applications with Textual and Python is Released appeared first on Mouse Vs Python.
Ruslan Spivak
Book Notes: The Dark Art of Linear Algebra by Seth Braver — Chapter 1 Review
“Mathematics is the art of reducing any problem to linear algebra.” — William Stein
If you’ve ever looked at a vector and thought, “Just a column of numbers, right?”, this chapter will change that. The Dark Art of Linear Algebra (aka DALA) by Seth Braver opens with one of the clearest intros I’ve read. Not every part clicks on the first pass, but the effort pays off. Paired with the author’s videos, this is a strong starting point whether you’re learning math for the first time or coming back to it with purpose.
As I wrote in Unlocking AI with Math and [Book Notes] Infinitesimals, Derivatives, and Beer – Full Frontal Calculus (Ch. 1), I’m not learning math to pass a test. I’m learning it to understand the machinery behind AI and robotics, and eventually build machines of my own. (That would be fun, right?)
That goal needs a solid grasp of linear algebra. And it starts with understanding what a vector really is. Not just how to work with vectors algebraically, but how they behave in space and fit into a larger structure.
This chapter helped me sharpen that understanding.
Chapter Notes
What’s a Vector?
The book makes it clear that the answer to this question will evolve as you go deeper into linear algebra. But Chapter 1 starts simple: a vector is an arrow. A geometric object. A displacement.
In the video that comes with the chapter, the author even says to forget everything you think you know about vectors. He introduces them geometrically, which makes them feel tangible and helps you see familiar algebraic ideas in a visual, spatial way.
Vector Addition
The book introduces vector addition visually. Once you see vectors as displacements or moves through space, the addition feels natural. Almost obvious.
Image source: DALA Ch1
The text doesn’t focus on vector subtraction, but there’s an exercise on it. The companion video shows two methods. One of them is subtraction by addition: flip the direction of the vector you want to subtract, then add. It reminded me of that Office scene where Andy says “addition by subtraction,” and Michael asks, “What does that even mean?” In that context, it’s just a throwaway phrase. But in vector math, subtraction by addition is a real method. Flip the vector, then add. If you’ve done engineering, you’ve likely seen this before.
Vector addition also follows familiar rules like commutativity and associativity. If those sound fuzzy, the book and video prove them using triangles and parallelograms. No heavy algebra, just geometry.
One nice bonus is that the commutative proof gives you another way to add vectors. Place both tails at the same point, draw a parallelogram, and the diagonal gives the sum. It’s clean and easy to visualize:
Stretching Vectors
Scalar multiplication is introduced as a way to stretch, shrink, or flip a vector, not just multiply its components.
The author even explains where the word scalar comes from. Numbers are called scalars because they scale vectors. I liked that he doesn’t assume you already know this.
To stretch a vector, multiply by 3.
To flip it, multiply by –1.
To collapse it, multiply by 0.
It’s easier to remember when you learn it by drawing instead of just computing.
Standard Basis Vectors
Only after you’ve built a solid geometric understanding does the author introduce the standard basis vectors: i, j, and k. By then, it’s clear that 2i + 3j + 5k is just a weighted sum of familiar directions.
The chapter shows how to express vectors in ℝ² and ℝ³ using these basis vectors, and how to rewrite them in column form.
Length of Vectors
Be sure to watch the videos that go with this chapter. They walk you through finding the length of a vector visually.
You’ll start with the Pythagorean theorem to calculate the length of a vector in ℝ³, then extend the idea to ℝⁿ. The chapter also proves the general length formula when a vector is written in Cartesian coordinates. Neat.
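In symbols (my rendering, not necessarily the book's notation):

\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}
\quad\text{for}\quad
\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + \cdots + v_n\mathbf{e}_n \in \mathbb{R}^n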
The Dot Product
The chapter defines the dot product using the same geometric approach as earlier sections, and it makes sense. But for me, it really clicked in the physics example where work is defined using the dot product. The author’s video made it even clearer.
In the screenshot above, I underlined “Thus we see that work, viewed in a more general setting, is simply a dot product” and scribbled “watch the video” in the margin. Just a reminder that the video is a great companion to the chapter.
The text then walks through key properties: commutativity, dotting a vector with itself, the distributive property, a test for perpendicularity, and how to compute the dot product in ℝ².
You could memorize the formula. But it’s much more satisfying to understand the parts and derive it from scratch. Like Einstein said, “Any fool can know. The point is to understand.”
Here’s a step-by-step derivation, written out in my notes:
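The gist, in my own rendering for vectors in ℝ² (law of cosines, then expand in coordinates):

% law of cosines for the triangle with sides a, b, and a - b:
\|\mathbf{a}-\mathbf{b}\|^2
  = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 - 2\,\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta

% expand the left-hand side componentwise:
(a_1-b_1)^2 + (a_2-b_2)^2
  = a_1^2 + a_2^2 + b_1^2 + b_2^2 - 2(a_1 b_1 + a_2 b_2)

% cancel the squared lengths on both sides and divide by -2:
\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2
  = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta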
Thoughts and Tips
Like Full Frontal Calculus did for derivatives, this chapter tears vectors down to the basics and builds them back up. It does that visually, intuitively, and from first principles. It starts with geometry, not formulas. By the end, it’s clear that coordinates are just a way to describe vectors. They are not the vectors themselves.
Verdict: Highly recommend if you want a clear, visual grasp of what vectors really are. Especially if linear algebra has ever felt abstract, dry, or overly symbolic.
If you plan to read the chapter, these tips helped me get the most out of it:
- Read slowly. Then read slowly again. The material is clear, but it rewards focused attention. Grab a paperback if you can. Write in the margins. Make the book your own.
- Watch the author’s YouTube videos. The book explains the idea. The video often makes it stick. If you’re reading any of Braver’s math books, don’t skip the videos. They’re short, clear, and worth it.
- Don’t worry about the proofs. They’re explained in plain language, supported by visuals, and still rigorous. You don’t need a separate book on how to follow them. They just make sense.
- Brush up on your trig. Knowing how cosine works pays off when finding angles between vectors. It’s a small part of the chapter, but if you’re rusty, check out the trig section in Precalculus Made Difficult by the same author.
- Do the exercises. The book includes answers, which makes it great for self-study. But like in Full Frontal Calculus, the solutions are compact. Use ChatGPT or Grok (xAI) to expand on them when needed.
- Use spaced repetition. For ideas that are hard to keep in memory, try active recall. I use Anki, but any similar tool should work.
- Check out the book sample. The author offers a sample on his site. If you’re on the fence, it gives you a solid feel for the writing and style.
These pages and videos are exactly what I wish I had the first time I saw vectors. They make the concept click and give you a foundation you can build on, whether you’re starting fresh or coming back to review.
More to come. Stay tuned.
Originally published in my newsletter Beyond Basics. If you’d like to get future posts like this by email, you can subscribe here.
P.S. I’m not affiliated with the author. I just really enjoy his books and wanted to share that.
Real Python
Getting Started With marimo Notebooks
marimo notebooks redefine the notebook experience by offering a reactive environment that addresses the limitations of traditional linear notebooks. With marimo, you can seamlessly reproduce and share content while benefiting from automatic cell updates and a correct execution order. Discover how marimo’s features make it an ideal tool for documenting research and learning activities.
By the end of this video course, you’ll understand that:
- marimo notebooks automatically update dependent cells, ensuring consistent results across your work.
- Reactivity allows marimo to determine the correct running order of cells using a directed acyclic graph (DAG).
- Sandboxing in marimo creates isolated environments for notebooks, preventing package conflicts and ensuring reproducibility.
- You can add interactivity to marimo notebooks with UI elements like sliders and radio buttons.
- Traditional linear notebooks have inherent flaws, such as hidden state issues, that marimo addresses with its reactive design.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Ned Batchelder
2048: iterators and iterables
I wrote a low-tech terminal-based version of the classic 2048 game and had some interesting difficulties with iterators along the way.
2048 has a 4×4 grid with sliding tiles. Because the tiles can slide left or right and up or down, sometimes we want to loop over the rows and columns from 0 to 3, and sometimes from 3 to 0. My first attempt looked like this:
N = 4

if sliding_right:
    cols = range(N-1, -1, -1)   # 3 2 1 0
else:
    cols = range(N)             # 0 1 2 3

if sliding_down:
    rows = range(N-1, -1, -1)   # 3 2 1 0
else:
    rows = range(N)             # 0 1 2 3

for row in rows:
    for col in cols:
        ...
This worked, but those counting-down ranges are ugly. Let’s make it nicer:
cols = range(N)             # 0 1 2 3
if sliding_right:
    cols = reversed(cols)   # 3 2 1 0

rows = range(N)             # 0 1 2 3
if sliding_down:
    rows = reversed(rows)   # 3 2 1 0

for row in rows:
    for col in cols:
        ...
Looks cleaner, but it doesn’t work! Can you see why? It took me a bit of debugging to see the light.
range() produces an iterable: something that can be iterated over. Similar but different is that reversed() produces an iterator: something that is already iterating. Some iterables (like ranges) can be used more than once, creating a new iterator each time. But once an iterator like reversed() has been consumed, it is done. Iterating it again will produce no values.
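A quick REPL session makes the difference concrete:

>>> it = reversed(range(3))
>>> list(it)
[2, 1, 0]
>>> list(it)            # the iterator is already consumed
[]
>>> r = range(3)
>>> list(r), list(r)    # a range is a reusable iterable
([0, 1, 2], [0, 1, 2])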
If “iterable” vs “iterator” is already confusing, here’s a quick definition: an iterable is something that can be iterated, that can produce values in a particular order. An iterator tracks the state of an iteration in progress. An analogy: the pages of a book are iterable; a bookmark is an iterator. The English hints at it: an iter-able is able to be iterated at some point, an iterator is actively iterating.
The outer loop of my double loop was iterating only once over the rows, so the row iteration was fine whether it was going forward or backward. But the columns were being iterated again for each row. If the columns were going forward, they were a range, a reusable iterable, and everything worked fine.
But if the columns were meant to go backward, they were a one-use-only iterator made by reversed(). The first row would get all the columns, but the other rows would try to iterate using a fully consumed iterator and get nothing.
The simple fix was to use list() to turn my iterator into a reusable iterable:
cols = list(reversed(cols))
The code was slightly less nice, but it worked. An even better fix was to change my doubly nested loop into a single loop:
for row, col in itertools.product(rows, cols):
That also takes care of the original iterator/iterable problem, so I can get rid of that first fix:
cols = range(N)
if sliding_right:
    cols = reversed(cols)

rows = range(N)
if sliding_down:
    rows = reversed(rows)

for row, col in itertools.product(rows, cols):
    ...
Once I had this working, I wondered why product() solved the iterator/iterable problem. The docs have a sample Python implementation that shows why: internally, product() is doing just what my list() call did: it makes an explicit iterable from each of the iterables it was passed, then picks values from them to make the pairs. This lets product() accept iterators (like my reversed range) rather than forcing the caller to always pass reusable iterables.
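A quick check confirms that product() happily reuses values from a one-shot iterator:

>>> import itertools
>>> list(itertools.product(range(2), reversed(range(2))))
[(0, 1), (0, 0), (1, 1), (1, 0)]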
If your head is spinning from all this iterable / iterator / iteration talk, I don’t blame you. Just now I said, “it makes an explicit iterable from each of the iterables it was passed.” How does that make sense? Well, an iterator is an iterable. So product() can take either a reusable iterable (like a range or a list) or it can take a use-once iterator (like a reversed range). Either way, it populates its own reusable iterables internally.
Python’s iteration features are powerful but sometimes require careful thinking to get right. Don’t overlook the tools in itertools, and mind your iterators and iterables!
• • •
Some more notes:
1: Another way to reverse a range: you can slice it!
>>> range(4)
range(0, 4)
>>> range(4)[::-1]
range(3, -1, -1)
>>> reversed(range(4))
<range_iterator object at 0x10307cba0>
It didn’t occur to me to reverse-slice the range, since reversed is right there, but the slice gives you a new reusable range object while reversing the range gives you a use-once iterator.
2: Why did product() explicitly store the values it would need but reversed did not? Two reasons: first, reversed() depends on the __reversed__ dunder method, so it’s up to the original object to decide how to implement it. Ranges know how to produce their values in backward order, so they don’t need to store them all. Second, product() is going to need to use the values from each iterable many times and can’t depend on the iterables being reusable.
Python Bytes
#440 Can't Register for VibeCon
Topics covered in this episode:
- Switching to direnv, Starship, and uv
- rqlite - Distributed SQLite DB
- Some Markdown Stuff
- Extras
- Joke
Watch on YouTube: https://www.youtube.com/watch?v=AXcQsRZRd8k
About the show
Sponsored by PropelAuth: pythonbytes.fm/propelauth77
Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we’ll never share it.
Brian #1: Switching to direnv, Starship, and uv - Trey Hunner (https://treyhunner.com/2024/10/switching-from-virtualenvwrapper-to-direnv-starship-and-uv/?featured_on=pythonbytes)
- Last week I mentioned that I’m ready to try direnv again, but secretly, I still had some worries about the process. Thankfully, Trey has a tutorial to walk me past the troublesome parts.
- direnv (https://direnv.net?featured_on=pythonbytes) - an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.
- Trey has solved a bunch of the problems I had when I tried direnv before:
  - Show the virtual environment name in the prompt.
  - Place new virtual environments in a local .venv instead of in .direnv/python3.12.
  - Silence all of the “loading” and “unloading” statements every time you enter a directory.
  - A venv script to create an environment, activate it, and create an .envrc file. I’m more used to a create script, so I’ll stick with that name and Trey’s contents.
  - A workon script to be able to switch around to different projects. This is a carryover from virtualenvwrapper, but seems cool. I’ll take it.
  - Adding uv to the mix for creating virtual environments, interestingly including --seed, which, for one, installs pip in the new environment. (Some tools need it, even if you don’t.)
- Starship
  - Trey also has some setup for Starship. But I’ll get through the above first, then MAYBE try Starship again.
  - Some motivation:
    - Trey’s setup is pretty simple. Maybe I was trying to get too fancy before.
    - Starship config lives in TOML files that can be loaded with direnv and be different for different projects. Neato.
    - Also, Trey mentions his dotfiles repo. This is a cool idea that I’ve been meaning to do for a long time.
- See also: It's Terminal - Bootstrapping With Starship, Just, Direnv, and UV - Mario Munoz (https://www.pythonbynight.com/blog/terminal?featured_on=pythonbytes)
Michael #2: rqlite - Distributed SQLite DB (https://rqlite.io?featured_on=pythonbytes)
- via themlu, thanks! (https://fosstodon.org/@themlu/114852806589871969)
- rqlite is a lightweight, user-friendly, distributed relational database built on SQLite.
- Built on SQLite, the world’s most popular database.
- Supports full-text search, vector search, and JSON documents.
- Access controls and encryption for secure deployments.
Michael #3: A Python dict that can report which keys you did not use (https://www.peterbe.com/plog/a-python-dict-that-can-report-which-keys-you-did-not-use?featured_on=pythonbytes)
- by Peter Bengtsson
- Very cool for testing that a dictionary has been used as expected (e.g., all data has been sent out via an API or report). A minimal sketch follows these notes.
- Note: it does NOT track d.get(), but it’s easy to just add it to the class in the post.
- Maybe someone should polish it up and put it on PyPI (that person is not me :) ).
Brian #4: Some Markdown Stuff
- Textual 4.0.0 adds Markdown.append, which can be used to efficiently stream Markdown content.
  - The reason for the major bump is an interface change to Widget.anchor.
  - Refreshing to see a semantic change cause a major version bump.
- html-to-markdown
  - Converts HTML to Markdown.
  - A complete rewrite fork of markdownify.
  - Lots of fun features like “streaming support”. Curious if it can stream to Textual’s Markdown.append method. Hmmm.
Joke: Vibecon is hard to attend (https://www.reddit.com/r/programminghumor/comments/1ko7ube/vibecon/?featured_on=pythonbytes)
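The core trick behind Michael’s third item is a dict subclass that records reads. Here is a minimal sketch of the idea, not Peter’s exact implementation; the names TrackingDict, _accessed, and unused_keys are invented for illustration:

class TrackingDict(dict):
    """A dict that remembers which keys were read."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._accessed = set()

    def __getitem__(self, key):
        self._accessed.add(key)  # record every lookup
        return super().__getitem__(key)

    def get(self, key, default=None):
        self._accessed.add(key)  # the version in the post skips .get(); adding it looks like this
        return super().get(key, default)

    @property
    def unused_keys(self):
        return set(self) - self._accessed

d = TrackingDict(name="Ada", email="ada@example.com", phone="555-0100")
d["name"]
d.get("email")
print(d.unused_keys)  # {'phone'}

In a test, you’d assert that unused_keys is empty after the code under test has consumed the dictionary.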
Programiz
Getting Started with Python
In this tutorial, you will learn to write your first Python program.
Seth Michael Larson
Email has algorithmic curation, too
Communication technologies should ideally be reliable, especially when both parties have opted in to consistent, reliable delivery. I don't want someone else deciding whether I receive a text message or email from a friend.
I associate "algorithmic curation" with social media platforms like TikTok, YouTube, Twitter, or Instagram. I don't typically think about email as a communication technology that contains algorithmic curation. Maybe that thinking should change?
Email for most people has algorithmic curation applied by their email provider. Providers like Gmail automatically filter email and decide which "category" it ends up in, regardless of how much you trust the sender or whether you've opted in to their emails. Some of these categories are harmless, like "Social", where social media updates are filtered into their own category but not hidden in any meaningful way.
The destructive category is one we know and love: "Spam". Spam filtering is usually a good thing; if you've ever looked in the folder, you understand why it exists. However, many email providers don't offer a way to opt out of spam filtering, even for senders that have sent you hundreds of high-quality, opted-in emails.
This is especially relevant for email newsletters. I publish an email newsletter for this blog, and yet I would prefer you not use the newsletter and instead use RSS. If you enjoy the blog's content enough to want a notification when there's more, then you probably want delivery to be reliable.
My previous email was sent to the Spam folder for at least Gmail, and from reading the email I'm not sure why that would be the case. The language isn't any different from the rest of my emails, and yet the number of deliveries and opens is less than half that of a typical email.
As someone trying to communicate with readers, what am I supposed to learn or do in this situation? Just like on other algorithmically curated platforms, I feel like I'm at the mercy of a process that isn't understandable and is prone to change without warning.
Reliable communication technologies like RSS are the answer. If you're a regular consumer of internet content, I highly recommend installing an RSS feed reader. My personal recommendation (one that I use and pay for) is Inoreader. You'd be surprised which platforms offer RSS as a reliable alternative to their typical curation approach; for example, YouTube offers RSS feeds for channels.
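To make the YouTube point concrete, here is a minimal sketch using the third-party feedparser package; CHANNEL_ID is a placeholder, not a real channel:

import feedparser

# YouTube publishes an Atom feed per channel at this well-known URL pattern.
# Replace CHANNEL_ID with an actual channel ID.
url = "https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID"
feed = feedparser.parse(url)

for entry in feed.entries:
    print(entry.title, entry.link)

No account, no recommendation engine: every new video shows up, in order, in your reader.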
As a web surfer, I hope this article inspires you to choose a reliable communication technology like RSS when "subscribing" to internet creatives, so you never miss another publication. If you're a publisher, providing your content through a reliable opt-in medium like RSS, Patreon, or even Discord means only you and your readers control who sees your content.
July 14, 2025
Real Python
How to Debug Common Python Errors
Python debugging involves identifying and fixing errors in your code using tools like tracebacks, print() calls, breakpoints, and tests. In this tutorial, you’ll learn how to interpret error messages, use print() to track variable values, and set breakpoints to pause execution and inspect your code’s behavior. You’ll also explore how writing tests can help prevent errors and ensure your code runs as expected.
By the end of this tutorial, you’ll understand that:
- Debugging means identifying, analyzing, and resolving issues in your Python code using systematic approaches.
- Tracebacks are messages that help you pinpoint where errors occur in your code, allowing you to resolve them effectively.
- Using print() helps you track variable values and understand code flow, aiding in error identification.
- Breakpoints let you pause code execution to inspect and debug specific parts, improving error detection (see the sketch below).
- Writing and running tests before or during development aids in catching errors early and ensures code reliability.
Understanding these debugging techniques will empower you to handle Python errors confidently and maintain efficient code.
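As a quick taste of the breakpoint technique, the built-in breakpoint() call drops you into the pdb debugger wherever you place it. A minimal sketch, with a made-up function for illustration:

def average(numbers):
    total = 0
    for number in numbers:
        total += number
    breakpoint()  # execution pauses here; inspect total and numbers at the pdb prompt, then type c to continue
    return total / len(numbers)

print(average([2, 4, 6]))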
Get Your Code: Click here to download the free sample code that shows you how to debug common Python errors.
Take the Quiz: Test your knowledge with our interactive “How to Debug Common Python Errors” quiz. You’ll receive a score upon completion to help you track your learning progress:
Take this quiz to review core Python debugging techniques like reading tracebacks, using print(), and setting breakpoints to find and fix errors.
How to Get Started With Debugging in Python
Debugging means to unravel what is sometimes hidden. It’s the process of identifying, analyzing, and resolving issues, errors, or bugs in your code.
At its core, debugging involves systematically examining code to determine the root cause of a problem and implementing fixes to ensure the program functions as intended. Debugging is an essential skill for you to develop.
Debugging often involves using tools and techniques such as breakpoints, logging, and tests to achieve error-free and optimized performance of your code. In simpler terms, to debug is to dig through your code and error messages in an attempt to find the source of the problem, and then come up with a solution to the problem.
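Logging is one of those techniques: the standard library’s logging module can record variable state as your program runs, without pausing it. A minimal sketch, with an invented function for illustration:

import logging

# Show DEBUG-level messages; by default only WARNING and above appear.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def divide(a, b):
    logger.debug("divide called with a=%r, b=%r", a, b)  # printed only when DEBUG logging is on
    return a / b

divide(10, 2)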
Say you have the following code:
cat.py
print(cat)
The code that prints the variable cat is saved in a file called cat.py. If you try to run the file, then you’ll get a traceback error saying that it can’t find the definition for the variable named cat:
$ python cat.py
Traceback (most recent call last):
File "/path_to_your_file/cat.py", line 1, in <module>
print(cat)
^^^
NameError: name 'cat' is not defined
When Python encounters an error during execution, it prints a traceback, which is a detailed message that shows where the problem occurred in your code. In this example, the variable named cat can’t be found because it hasn’t been defined.
Here’s what each part of this Python traceback means:
Part | Explanation
---|---
Traceback (most recent call last) | A generic message sent by Python to notify you of a problem with your code.
File "/path_to_your_file/cat.py" | This points to the file where the error originated.
line 1, in <module> | Tells you the exact line in the file where the error occurred.
print(cat) | Shows you the line of Python code that caused the error.
NameError | Tells you the kind of error it is. In this example, you have a NameError.
name 'cat' is not defined | This is the specific error message that tells you a bit more about what’s wrong with the piece of code.
In this example, the Python interpreter can’t find any prior definition of the variable cat and therefore can’t provide a value when you call print(cat). This is a common Python error that can happen when you forget to define variables with initial values.
To fix this error, you’ll need to take a step-by-step approach by reading the error message, identifying the problem, and testing solutions until you find one that works.
In this case, the solution would be to assign a value to the variable cat before the print call. Here’s an example:
cat.py
cat = "Siamese"
print(cat)
Notice that the error message disappears when you rerun your program, and the following output is printed:
$ python cat.py
Siamese
The text string stored in cat is printed as the code output. With this error resolved, you’re well on your way to quickly debugging errors in Python.
In the next sections, you’ll explore other approaches to debugging, but first, you’ll take a closer look at using tracebacks.
Read the full article at https://realpython.com/debug-python-errors/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]