If you don't know what Unicode, UTF-8, or UTF-16 are, or if you need to refresh your memory on these topics, then before reading this article, check out this one: What everyone should know about Unicode.
I have divided this article into two parts, one for Python 2 and one for Python 3, since there are some differences in how each handles strings and Unicode in general. You can read both of them, or just the one for the language you are interested in. This article is for Python 2. For Python 3, check out this article.
Python 2 has two built-in types for handling strings. One is the str type, and the other is unicode. All the string literals that you normally create in your Python program, like 'hello', are str type string literals. These are also called byte strings, as they are merely a series of bytes. For example, if you take the length of a byte string that ends in non-ASCII bytes, you would get output like the one shown below.
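Here is a minimal sketch of such an example; the exact string is my assumption, built from the bytes \xce\xa9 (the UTF-8 encoding of Ω) that this article uses later.

>>> s = 'abc\xce\xa9'       # a plain string literal is a byte string in Python 2
>>> type(s)
<type 'str'>
>>> len(s)                  # counts bytes, not characters
5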
As you can see from the output above, the length of this string is just the number of bytes in it. One thing to note in the above example is that the last two bytes are non-ASCII characters, since they are greater than \x7f, but they are still counted as valid bytes. You can use any byte in the range \x00 to \xff in a byte string.
Now, to enter Unicode characters, you have several options. The first option is the u prefix on the string literal. This would give you a unicode type string.
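For example (a sketch; I am assuming the Ω character that the rest of the article works with):

>>> s = u'\u03a9'
>>> type(s)
<type 'unicode'>
>>> len(s)                  # counts characters, not bytes
1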
The second option is the built-in unicode() function. By default, the unicode() function uses the Python default encoding scheme, which is usually ascii. You can get the Python default encoding of your system as shown below.
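A quick way to check it (this particular snippet is mine, but sys.getdefaultencoding() is the standard call):

>>> import sys
>>> sys.getdefaultencoding()
'ascii'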
You should never change this default encoding, since many other programs and modules rely on this default behaviour and may break if you change it to something else. Instead, you should pass your desired encoding as an argument to the unicode() function.
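For example, something along these lines (a sketch using the UTF-8 bytes for Ω):

>>> s = unicode('\xce\xa9', 'utf8')
>>> s
u'\u03a9'
>>> type(s)
<type 'unicode'>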
In the example above, the encoding argument specifies the encoding of the input string given as the argument to the unicode() function. Here, I am passing 'utf8' because I am using \xce\xa9, which is the UTF-8 encoding of the Ω character.
The third option is the str.decode() function. You can call decode() on a str object and pass in the encoding of the string being decoded, and you get back a unicode string, as shown below.
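A rough sketch (again using the UTF-8 bytes for Ω; the variable names are mine):

>>> byte_string = '\xce\xa9'                  # UTF-8 encoded bytes
>>> uni_string = byte_string.decode('utf8')   # decode using the string's own encoding
>>> uni_string
u'\u03a9'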
Finally, to convert the unicode string back into a normal str, or byte string, use decode()'s counterpart function: encode().
Just pass in the encoding, and it will give you the encoded byte string. On my machine, it gave the output shown below.
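Roughly like this (the session is my reconstruction; the byte values are what the utf-16 codec produces on a little-endian machine):

>>> uni_string = u'\u03a9'
>>> uni_string.encode('utf-16')
'\xff\xfe\xa9\x03'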
Here, \xfe\xff is the BOM character, followed by \x03\xa9, the UTF-16 encoding of the Ω character, but the bytes of each appear in reverse order, since my machine is a little-endian machine. You may get a different result on your machine, depending on its endianness.
Now suppose you want to read in a file which is encoded using UTF-8 or UTF-16. You can use Python's built-in open() function, but it doesn't understand encodings; it will just read the bytes of the file and dump them back to you. So either you could write your own decoder for turning those bytes into unicode, or you can use the nice open() function provided by the codecs module.
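A minimal sketch of reading with it (the file name is a placeholder):

>>> import codecs
>>> f = codecs.open('some_file.txt', encoding='utf-8')
>>> text = f.read()        # returns a unicode object, already decoded
>>> f.close()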
Similarly, you can use the file.write() function on a file opened with codecs.open() to write a unicode string, and the codecs module will internally convert your string into the proper series of bytes and write them to disk.
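For instance (again, the file name and the string are placeholders):

>>> import codecs
>>> f = codecs.open('some_file.txt', 'w', encoding='utf-16')
>>> f.write(u'\u03a9')     # codecs encodes the unicode string to UTF-16 bytes for you
>>> f.close()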
Generally, when you have to work with Unicode, what you can do is accept input from any source (the user, a network, a file) in any encoding, and convert all of that data into unicode internally. Then you can safely work on that unicode data, because you know all of it is in the same format. When the time comes to write or send the data, convert it back to the original encoding.
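Put together, the pattern looks roughly like this (the encodings and the processing step are placeholders):

>>> raw = '\xce\xa9 is a Greek letter'        # bytes from a file, a socket, the user, ...
>>> text = raw.decode('utf8')                 # decode once, at the boundary
>>> text = text.upper()                       # work entirely in unicode
>>> output = text.encode('utf8')              # encode only when writing or sending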
Thank you for reading my article. Let me know in the comments section below if you liked it, or if you have any other suggestions for me. And please, feel free to share :)