What's up with all those equals signs anyway?

For context, this is the Lars Ingebrigtsen who wrote the manual for Gnus[0], a common Emacs package for reading email and Usenet. It’s clever, funny, and wildly informative. Lars has probably forgotten more about email parsing than 99% of us here will ever have learned.

The manual itself says[1]:

> Often when I read the manual, I think that we should take a collection up to have Lars psycho-analysed.

0: https://www.gnu.org/software/emacs/manual/html_mono/gnus.htm...

1: https://www.gnus.org/manual.html

5 hours agokstrauser

Not only the manual, but Gnus itself. I remember this guy from the university (UiO) when he started working on Gnus. He was a small celebrity among us informatics students, and we all used Emacs and Gnus, of course.

3 hours agosovande

Also Gmane, the once-popular mailing list search site.

25 minutes agorurban

I'd forgotten that! Yeah, I believe Lars also wrote a huge chunk of the current Gnus. I stopped using it a while back and maybe someone else came along and rewrote it again, replacing all his code, but I don't think that's the case.

Gnus was absolutely delightful back in the day. I moved on around the time I had to start writing non-plaintext emails for work reasons. It's also handy to be using the same general email apps and systems as 99.99% of the rest of the world. I still have a soft spot in my heart for it.

PS: Also, I have no idea whatsoever why someone would downvote you for that. Weird.

an hour agokstrauser

The real punchline is that this is a perfect example of "just enough knowledge to be dangerous." Whoever processed these emails knew enough to know emails aren't plain text, but not enough to know that quoted-printable decoding isn't something you hand-roll with find-and-replace. It's the same class of bug as manually parsing HTML with regex: it works right up until it doesn't, and then you get congressional evidence full of mystery equals signs.
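A small Python sketch of that bug class (my own illustration, not the actual pipeline used on these emails): the standard library's quopri module decodes quoted-printable correctly, while a hand-rolled rule that deletes '=' plus the next two bytes, run after a CRLF-to-LF conversion, eats the first letter after every soft line break.

```python
import quopri

# A quoted-printable body as a mail server might emit it.
# "=\r\n" is a soft line break; "=C3=A9" encodes the UTF-8 bytes of "é".
raw = b"This is a long line that the enco=\r\nder wrapped, plus caf=C3=A9."

# Correct decoding: the soft break disappears and the word is rejoined.
print(quopri.decodestring(raw).decode("utf-8"))
# -> This is a long line that the encoder wrapped, plus café.

# Hand-rolled "decoder": normalize CRLF to LF first, then delete '=' and
# the two bytes after it (a rule that assumed three-byte =XX escapes).
normalized = raw.replace(b"\r\n", b"\n")
broken = bytearray()
i = 0
while i < len(normalized):
    if normalized[i : i + 1] == b"=":
        i += 3  # skips '=', the LF, and the first letter of the next line
    else:
        broken.append(normalized[i])
        i += 1
print(broken.decode("utf-8"))
# -> This is a long line that the encoer wrapped, plus caf.
```

Note the lost "d" in "encoder" and the wholesale deletion of the hex escapes: exactly the class of silent damage you get from treating an encoding as a string-replace problem.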

9 hours agoruhith

> It's the same class of bug as manually parsing HTML with regex, it works right up until it doesn't

I'm sure you already know this one, but for anyone else reading this I can share my favourite StackOverflow answer of all time: https://stackoverflow.com/a/1732454

8 hours agolvncelot

I prefer the question about CPU pipelines that gets explained using a railroad switch as an example. That one does a decent job of answering the question, instead of going off on a, how best to put it, mentally deranged one-page rant about regexes with the lazy throwaway line at the end being the only thing that makes it qualify as an answer at all.

8 hours agojosefx

The regex answer is from the very old days of Stack Overflow, before fun was banned. I agree it barely qualifies as an answer, but considering that the question has over 4 million page views (which almost puts it in the top 100 most viewed questions of all time), it has reached a lot of people. The answer probably had much more influence than any serious answer on that topic would have. So I'd say the author did a good job.

8 hours agokapep

Of all the things I wrote on SO, including many actually-useful detailed explanations, it was this drunken rant that stuck, for some reason.

7 hours agobobince

And for that I applaud you.

I know it's a hassle for a platform to separate good rants from bad ones, and I decry SO for pushing too hard against these. I truly believe that our industry would benefit from more drunken technical rants.

6 hours agofalcor84

I think of, and look up, this drunken rant at least once a year.

5 hours agoscott_s

People have shared it here and on reddit a bunch of times because it's funny. I always found the pragmatic counter-answer about using regex and the comments about how brittle it is to parse XML properly assuming a specific structure to be much more useful.

7 hours agoDangitBobby

But--and this is crucial--the one about regexes is hilarious.

It also comes from a time in Internet culture when humor was appreciated instead of aggressively downvoted.

5 hours agobityard

It's because the author put effort into it. Most (online) humour is lazy, low effort, regurgitated meme spam. See: Reddit. It should be downvoted and ideally never posted at all.

This is also the reason why I consider the lack of images in IRC a feature.

3 hours agoencom

It took me years to notice, but did you catch that the answer actually subtly misinterprets what the question is asking for?

Guy (in my reading) appears to talk about matching an entire HTML document with regex. Indeed, that is not possible due to the grammars involved. But that is not what was being asked.

What was being asked is whether the individual HTML tags can be parsed via regex. And to my understanding those are very much workable, and there's no grammar capability mismatch either.

4 hours agoperching_aix

The thing is, even when parsing HTML "correctly" (whatever that is), regexes will still be used. Sure, there will be a bunch of additional structures and mechanisms involved, but you will be identifying tokens via a bunch of regexes.

So yes, while it is a piece of inspired comedic genius, and sort of informative in that it opens your eyes to the limitations of regexes, it brushes under the rug all the places that those poor maligned regular expressions will still be used when parsing HTML.

2 hours agosomat

I think even for the single opening tags that were asked about, there are impossible edge cases.

For example, this is perfectly valid XHTML:

    <a href="/" title="<a /> />"></a>
4 hours agotiagod

No, that is not valid. The "<" and ">" characters in string values must always be escaped with &lt; and &gt;. The correct form would be:

    <a href="/" title="&lt;a /&gt; /&gt;"></a>
an hour agochungy

If you already know where the start of the opening tag is, then I think a regex is capable of finding the end of that same opening tag, even in cases like yours. In that sense, it’s possible to use a regex to parse a single tag. What’s not possible is finding opening tags within a larger fragment of HTML.

3 hours agocomex

For any given regex, an opponent can craft a string which is valid HTML but that the regex cannot parse. There are a million edge cases like:

  <!-- Don't count <hr> this! --> but do count <hr> this -->
and

  <!-- <!-- Ignore <hr> this --> but do count <hr> this -->
Now your regex has to handle balanced comment markers. Solve that.

You need a context-free grammar to correctly parse HTML with its quoting rules, and escaping, and embedded scripts and CDATA, etc. etc. etc. I don't think any common regex libraries are as powerful as CFGs.

Basically, you can get pretty far with regexes, but it's provably (like in a rigorous compsci kinda way) impossible to correctly parse all valid HTML with only regular expressions.

an hour agokstrauser

HE COMES

8 hours agoCthulhu_

I know this is grumpy, but I've never liked this answer. It is a perfect encapsulation of the elitism in the SO community: if you're new, your questions are closed and your answers are edited and downvoted. Meanwhile, this is tolerated only because it was posted by a member with high rep and username recognition.

8 hours agobayesnet

I think this answer was tolerated when SO wasn't as bad as it is now, and wouldn't be tolerated now from anyone.

7 hours ago1718627440

It's because SO at the time was a small high-trust society where "everyone knew each other" and so things flew back then that wouldn't fly now.

5 hours agobombcar

As someone who used to write custom crawlers 20 years ago, I can confirm that regular expressions worked great. All my crawlers were custom designed for a page and the sites were mostly generated by some CMS and had consistent HTML. I don't remember having to do much bug fixes that were related to regular expression issues.

I don't suggest writing generic HTML parsers that works with any site, but for custom crawlers they work great.

Not to say that the tools available are the same now as 20 years ago. Today I would probably use puppeteer or some similar tool and query the DOM instead.

7 hours agothrowaway_61235

An interesting thing is that most webpages are generated using text templates. There's some text processing like escaping special characters, but it's mostly text that happened to be (somewhat) valid HTML.

So extracting information from this text with regexps often makes perfect sense.

4 hours agovbezhenar

I would distinguish between parsing and scraping. Parsing really needs a, well, parser. Otherwise you’ll get things wrong on perfectly well formed input and your program will be brittle and weird.

A scraper is already resigned to being brittle and weird. You’re relying not only on the syntax of the data, but an implicit structure beyond that. This structure is unspecified and may change without notice, so whatever robustness you can achieve will come from being loose with what you accept and trying to guess what changes might be made on the other end. Regex is a decent tool for that.

5 hours agowat10000

Funny how differently people can perceive things. That's my least favorite SO answer of all time, and I cringe every time I see it.

It's a very bad answer. First of all, processing HTML with regex can be perfectly acceptable depending on what you're trying to do. Yes, this doesn't include full-blown "parsing" of arbitrary HTML, but there are plenty of ways in which you might want to process or transform HTML that either don't require producing a parse tree, don't require perfect accuracy, or are operating on HTML whose structure is constrained and known in advance. Second, it doesn't even attempt to explain to OP why parsing arbitrary HTML with regex is impossible or poorly-advised.

The OP didn't want his post to be taken over by someone hamming it up with an attempt at creative writing. He wanted a useful answer. Yes, this answer is "quirky" and "whimsical" and "fun" but I read those as euphemisms for "trying to conscript unwilling victims into your personal sense of nerd-humor".

5 hours agoumanwizard

There's nothing that brings joy into this world quite like the guy waiting around to tell people he doesn't like the thing they like.

5 hours agochucksmash

The whole argument hinges on one word in your post: arbitrary.

I parse my own HTML I produce directly in a context where I fully control the output. It works fine, but parsing other people’s HTML is a lesson in humility. I’ve also done that, but I did it as a one time thing. I parsed a specific point in time, refusing to change that at any point.

5 hours agophilistine

It also hinges on another word: parsing. There are things other than parsing that you might want to do. For example, if you want to count the number of `<hr>` tags in an HTML document, that doesn't require parsing it, and can indeed be done with regex.

5 hours agoumanwizard

No you can’t. You can have an unescaped <hr> inside a script tag, for example. The best you can do is a simple string search for “<hr>” and hope it’s returning what you think it might be returning. Regexps are not powerful enough to determine whether any particular instance of “<hr>” is actually an HTML tag.

Like, it’s not a matter of cleverness, either. You can’t code around it. It’s simply not possible.
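To make that concrete, here's a sketch in Python (my own example, not from the thread): a regex counts every textual occurrence of `<hr>`, while the standard library's HTMLParser, which knows that script contents and comments aren't markup, counts only the real tags. (html.parser is lenient rather than a full HTML5 parser, but it's enough to show the context problem.)

```python
import re
from html.parser import HTMLParser

html = """
<p>one <hr> two <hr></p>
<script>var s = "<hr>";</script>
<!-- <hr> inside a comment -->
"""

# Naive regex count: also matches the <hr> in the script and the comment.
print(len(re.findall(r"<hr\s*/?>", html)))  # 4

# A real parser only reports tags that appear in actual markup context.
class HrCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "hr":
            self.count += 1

counter = HrCounter()
counter.feed(html)
print(counter.count)  # 2
```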

4 hours agokstrauser

And because the output still looks mostly readable, nobody questions it until years later when it's suddenly evidence in front of Congress

4 hours agoErigmolCt

They have top men working on it right now.

9 hours agoV__

> We see that that’s a quite a long line. Mail servers don’t like that

Why do mail servers care about how long a line is? Why don't they just let the client reading the mail worry about wrapping the lines?

10 hours agotiborsaas

SMTP is a line-based protocol, including the part that transfers the message body.

The server needs to parse the message headers, so it can't be an opaque blob. If the client uses IMAP, the server needs to fully parse the message. The only alternative is POP3, where the client downloads all messages as blobs and you can only read your email from one location, which made sense in the year 2000 but not now when everyone has several devices.

9 hours agodirewolf20

Hey, POP3 still makes sense. Having a local copy of your emails is useful.

8 hours agofluoridation

If you want it to be the only copy and not sync with anything

POP3 is line-based too, anyway. Maybe you can rsync your maildir?

7 hours agodirewolf20

I just read it mainly in one place and through the web interface when I have to.

7 hours agofluoridation

If your "in one place" reader is still open and downloading messages then there will be no messages to view in the web interface when you have to.

6 hours agodylan604

There will, because my client doesn't delete the messages from the server when it downloads them.

4 hours agofluoridation

POP3 is more for reading and acting on your email in one place (taking notes, plan actions, discard and delete,…). No need to consume them on other devices as you’ve already extracted the important bits.

I use imap on my mobile device, but that’s mostly for recent emails until I get to my computer. Then it’s downloaded and deleted from the server.

4 hours agoskydhash

Isn’t the only difference between pop and imap that pop removes the mail from the server? I only use imap, and all my email is available offline.

3 hours agoJaxan

POP is a simple mail transfer protocol (hehe...). It supports three things: get number of mails, download mail by number, delete mail by number. This is what you need to move mails in bulk from one point to another. POP3 mail clients are local maildir clients that use POP3 to get new mail from the server. It's like SMTP if it were based on polling.

IMAP is an interactive protocol that is closer to the interaction between Gmail frontend and backend. It does many things. The client implements a local view of a central source of truth.

an hour agodirewolf20

No, the difference is that IMAP doesn't store anything other than headers on the client (at least, not until the user tries to read a message), while POP3 eagerly downloads messages whenever they're available. A POP3 client can be configured with various remote retention policies, or even to never delete downloaded messages.

I don't have an IMAP account available to check, but AFAIK, you should not have locally the content of any message you've never read before. The whole point of IMAP is that it doesn't download messages, but instead acts like a window into the server.

3 hours agofluoridation

Also, IMAP syncs the other way. If you tag a message locally or move it to another folder, it also happens on the server.

2 hours agommooss

But it's more akin to consuming a message queue. You have fetched it, it's gone.

5 hours agoahoka

This is incorrect. POP3 does not require fetched messages to be deleted from the server.

an hour agoforesto

Nothing stops you from locally archiving your email with IMAP.

3 hours agoencom

How do you do that, by default? Can you tell an IMAP client to work like POP3 and download everything?

2 hours agommooss

In Thunderbird you can "Select this folder for offline use".

44 minutes agomasfuerte

Some you can

an hour agodirewolf20

Mails are (or used to be) processed line-by-line, typically using fixed-length buffers. This avoids dynamic memory allocation and having to write a streaming parser. RFC 821 finally limited the line length to at most 1000 bytes.

Given a mechanism for soft line breaks, breaking at below 80 characters would increase compatibility with older mail software and be more convenient when listing the raw email in a terminal.

This is also why MIME Base64 typically inserts line breaks after 76 characters.
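That wrapping behavior is visible directly in Python's standard library (an illustration of the convention, using nothing beyond the stdlib):

```python
import base64

# MIME-style base64 (RFC 2045) wraps its output so that no encoded line
# exceeds 76 characters, keeping it safe for line-oriented mail software.
data = bytes(range(200))
encoded = base64.encodebytes(data)  # inserts '\n' every 76 output chars
longest = max(len(line) for line in encoded.splitlines())
print(longest)  # 76
```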

9 hours agolayer8

In early days, many/most people also read their email on terminals (or printers) with 80-column lines, so breaking lines at 72-ish was considered good email etiquette (to allow for later quoting prefix ">" without exceeding 80 characters).

7 hours agoSoftTalker

One of the technical marvels of the day were mail and usenet clients that could properly render quoted text from infinite, never ending flame wars!

4 hours agobjourne

I don't think kids today realize how little memory we had when SMTP was designed.

For example, the PDP-11 (early 1970s), which was shared among dozens of concurrent users, had 512 kilobytes of RAM. The VAX-11 (late 1970s) might have as much as 2 megabytes.

Programmers were literally counting bytes to write programs.

4 hours agoGMoromisato

This is how email work(ed) over SMTP. When each command was sent, it would get a '200'-class reply (success) or a 400/500-class reply (failure). Sound familiar?

    telnet smtp.mailserver.com 25
    HELO
    MAIL FROM: me@foo.com
    RCPT TO: you@bar.com
    DATA
    blah blah blah
    how's it going?
    talk to you later!
    .
    QUIT

9 hours agoliveoneggs

For anyone who wants to try this against a modern server:

    openssl s_client -connect smtp.mailserver.com:smtps -crlf
    220 smtp.mailserver.com ESMTP Postfix (Debian/GNU)
    EHLO example.com
    250-smtp.mailserver.com
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-AUTH PLAIN LOGIN
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250-DSN
    250-SMTPUTF8
    250 CHUNKING

    MAIL FROM:me@example.com
    250 2.1.0 Ok

    RCPT TO:postmaster
    250 2.1.5 Ok

    DATA
    354 End data with <CR><LF>.<CR><LF>

    Hi
    .
    250 2.0.0 Ok: queued as BADA579CCB

    QUIT
    221 2.0.0 Bye
7 hours ago1718627440

This brings back some fun memories from the 1990s when this was exactly how we would send prank emails.

8 hours agoTelemakhos

Yep! And also, if you included a blank line and then the headers for a new email in the bottom of your message, you could tell the server, hey, here comes another email for you to process!

If you were typing into a feedback form powered by something from Matt’s Script Archive, there was about a 95% chance you could trivially get it to send out multiple emails to other parties for every one email sent to the site’s owner.

5 hours agokstrauser

That was a nice part of the 1990s - many systems allowed for funny things ;)

6 hours agofix4fun

I like how SMTP was at least honest in calling it the "receipt to" address and not the "sender" address.

Edit: wrong.

7 hours agoxg15

RCPT TO specifies the destination (recipient) address, the "sender" is what is written in MAIL FROM.

However what most mail programs show as sender and recipient is neither, they rather show the headers contained in the message.

7 hours ago1718627440

Ah, sorry. You're right.

7 hours agoxg15

Back in the 80s-90s it was common to use static buffers to simplify implementation - you allocate a fixed-size buffer and reject a message if it has a line longer than the buffer size. The SMTP RFC specifies a 1000-character limit (including \r\n), but it's common to wrap at around 78 characters so it is easy to examine the source (on a small screen).

9 hours agocitrin_ru

The simplest reason: Mail servers have long had features which will send the mail client a substring of the text content without transferring the entire thing. Like the GMail inbox view, before you open any one message.

I suspect this is relevant because Quoted Printable was only a useful encoding for MIME types like text and HTML (the human readable email body), not binary (eg. Attachments, images, videos). Mail servers (if they want) can effectively treat the binary types as an opaque blob, while the text types can be read for more efficient transfer of message listings to the client.

10 hours agothephyber

As far as I can remember, most mail servers were fairly sane about that sort of thing, even back in the 90’s when this stuff was introduced. However, there were always these more or less motivated fears about some server somewhere running on some ancient IBM hardware using EBCDIC encoding and truncating everything to 72 characters because its model of the world was based on punched cards. So standards were written to handle all those bizarre systems. And I am sure that there is someone on HN who actually used one of those servers...

10 hours agoPinus

Thanks, I really expected a tale from the 70's, but did not see punch cards coming :)

9 hours agotiborsaas

The influence of 80 column punch cards remains pervasive.

9 hours agojibal

RFC822 explicitly says it is for readability on systems with simple display software. Given that the protocol is from 1982 and systems back then had between 4 and 16kb RAM in total it might have made sense to give the lower end thin client systems of the day something preprocessed.

9 hours agojosefx

You could expect a lot more (512kB, 1MB, 2MB) in an internet-connected machine running Unix or VMS.

4 hours agobadc0ffee

Also, it is an easy way to stop a denial-of-service attack. If you let an infinite amount into that field, I can remotely overflow your system's memory. The mail system can just error out and hang up on the person attempting the attack instead of crashing.

8 hours agosumtechguy

Surely you don't need the message to be broken up into lines just for that. Just read until a threshold is reached and then close the connection.

8 hours agofluoridation

Keep in mind that in ye olden days, email was not a worldwide communication method. It was more typical for it to be an internal-only mail system, running on whatever legacy mainframe your org had, and working within whatever constraints that forced. So in the 90s when the internet began to expand, and email to external organizations became a bigger thing, you were just as concerned with compatibility with all those legacy terminal-based mail programs, which led to different choices when engineering the systems.

10 hours agocodingdave

This is incorrect

9 hours agoliveoneggs

Are you certain? Not OP, but a huge chunk of early RFCs was about how to let giant IBM systems talk to everyone else, specifying everything from character sets (nearly universally “7-bit ASCII”) to end of line/message characters. Otherwise, IBM would’ve tried to make EBCDIC the default for everything.

For instance, consider FTP’s text mode, which was primarily a way to accidentally corrupt your download when you forgot to type “bin” first, but was also handy for getting human readable files from one incompatible system to another.

5 hours agokstrauser

I had a pre-'@' email address and it was able to communicate all over the world.

3 hours agoliveoneggs

My first reading was that you were disagreeing with the bits about email worrying about compatibility, and that part seemed reasonably true to me.

As to the other bits, I think even in the uucp era, email was mostly internal, by volume of mail sent, even though you could clearly talk to remote sites if everything was set up correctly. It was capable of being a worldwide communication system. I bet the local admins responsible for monitoring the telephone bill preferred to keep that in check, though.

2 hours agokstrauser

I thought the article would be about the various meanings of operators like = == === .=. <== ==> <<== ==>> (==) => =~=

9 hours agoheikkilevanto

What is this, a Haskell for ants?

9 hours agodirewolf20

It has to be at least… three times bigger than this

9 hours agodkga

My first association was the brainf..k (*.bf) programming language.

6 hours agofix4fun

This ended up being way more interesting

4 hours agoErigmolCt

The most interesting thing to me wasn't the equals signs, which I knew are from quoted-printable, but the fact that when an equals sign appears, a letter that should have been preceding or following it is missing. It's as if an off-by-one error has occurred, where instead of getting rid of the equals sign, it's gotten rid of part of the actual text. Perhaps the CRLF/LF thing is part of it.

6 hours agoTazeTSchnitzel

The article goes into exactly why this happens!

5 hours agobtown

That's exactly how you end up with mystery missing characters in something that's supposed to be evidence

4 hours agoErigmolCt

I'm just wondering why this problem shows up now. Why do lots of people suddenly post their old emails with a defective QP decoder?

> For some reason or other, people have been posting a lot of excerpts from old emails on Twitter over the last few days.

At the risk of having missed the latest meme or social media drama: does anyone know what this "some reason or other" is?

Edit: Question answered.

9 hours agoxg15

Presumably the Epstein files, but I'm not on twitter so not sure

9 hours agoSCdF

Ooh, that reason. Sorry for having been dense. Thanks!

9 hours agoxg15

Jeff Epstein? The New York financier?

7 hours agoavemg

the DOJ published another bunch of Epstein emails

9 hours agoropp

[flagged]

9 hours agojychang

Of course the Epstein files are serious.

But not everybody has every single global development / news event IVed into their veins. Many of us just don’t keep updated on global news such that we may not be aware of an event that happened in the last 3 days.

Important news tends to get to me eventually. And there is usually nothing I can do about something personally anyway (at least within a short time horizon), so there is really very little value in trying to stay informed of the absolute latest developments. The signal to noise ratio is far too low, and it also induces a bunch of unnecessary anxiety and stress.

So yes, believe it or not very many people are unaware of this.

7 hours agosd9

I wrote my own email archiving software. The hardest part was dealing with all the weird edge cases in my 20+ year collection of .eml files. For being so simple conceptually, email is surprisingly complicated.

9 hours agothedanbob

Email is one of those cursed standards where the committee wasn't building a protocol from scratch, but rather trying to build a universal standard by gluing together all of the independently developed existing systems in some way that might allow them to interoperate. Verifying that a string a user has typed is a valid email address is close to impossible short of just throwing up your hands and allowing anything with a @ somewhere in it.
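In that spirit, a minimal "throw up your hands" check looks something like this (my own sketch, with a name I made up; note that it already rejects addresses that are technically valid per RFC 5321/5322, such as quoted local parts):

```python
import re

# Pragmatic check: non-empty local part, '@', non-empty domain with a dot.
# Deliberately loose; real validation means sending a confirmation mail.
PRAGMATIC = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

tests = [
    "user@example.com",         # accepted
    '"odd local"@example.com',  # valid per RFC, rejected here
    "no-at-sign",               # rejected
]
for addr in tests:
    print(addr, bool(PRAGMATIC.match(addr)))
```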

an hour agojandrese

I wrote a console-based mail client, which was 25% C++ and 75% Lua for defining the UI and the processing.

It never got too popular, but I had users for a few years and I can honestly say MIME was the bane of my life for most of those years.

6 hours agostevekemp

Indeed. A big chunk of my email parser deals with missing or incorrect content headers. Most of the rest attempts to sensibly interpret the infinite combinations of parts found in multipart (and single-part!) emails.

3 hours agothedanbob

> So what’s happened here? Well, whoever collected these emails first converted from CRLF (i.e., “Windows” line ending coding) to “NL” (i.e., “Unix” line ending coding). This is pretty normal if you want to deal with email. But you then have one byte fewer:

I think there is a second possible conclusion, which is that the transformation happened historically. Everyone assumes these emails are an exact dump from Gmail, but isn't it possible that Epstein was syncing emails from Gmail to a third party mail server?

Since the Stackoverflow post details the exact situation in 2011, I think we should be open to the idea that we're seeing data collected from a secondary mail server, not Gmail directly.

Do we have anything to discount this?

(If I'm not mistaken, I think you can also see the "=" issue simply by applying the Quoted-Printable encoding twice, not just by mishandling the line-endings, which also makes me think two mail servers. It also explains why the "=" symbol is retained.)
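The double-encoding scenario is easy to reproduce with Python's quopri (a sketch of the hypothesis, not proof of what actually happened to these emails): encode twice, decode once, and literal '=' escape sequences survive in the output.

```python
import quopri

original = "caf\u00e9".encode("utf-8")  # b'caf\xc3\xa9'
once = quopri.encodestring(original)    # b'caf=C3=A9'
twice = quopri.encodestring(once)       # '=' re-escaped as '=3D'
print(quopri.decodestring(twice))
# -> b'caf=C3=A9': the escapes survive one decode as literal text
```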

10 hours agobeejiu

In one of the email PDFs I saw an XML plist with some metadata that looked like it was from Apple's Mail.app, so these might be extracted from whatever internal format that uses.

6 hours agoTazeTSchnitzel

When they process these emails, it's fairly common to import everything into an MS Outlook PST file (using whatever buggy tool). That's probably why these look like Outlook printouts even though it's Yahoo mail or etc.

2 hours agoflomo

Yeah, I wouldn't bet on this being a single bad Gmail export; it smells much more like the accumulated scars of multiple mail systems doing "helpful" things to the same messages over time

4 hours agoErigmolCt

This seems like the most likely reason to me!

10 hours agoMoltenMan


I love how HN always floats up the answers to questions that were in my mind, without occupying my mind.

I, too, was reading about the new Epstein files, wondering what text artifact was causing things to look like that.

11 hours agolordnacho

Same here. I did notice what I think was an actual error on someone's part: there was a chart in the files comparing black to white IQ distributions, and, well, just look at it:

https://nitter.net/AFpost/status/2017415163763429779?s=201

Something clearly went wrong in the process.

11 hours agoAlphaAndOmega0

Me too. I first assumed it was an OCR error, then remembered they were emails and wouldn't need to go through OCR. Then I thought that the US Government is exactly the kind of place to print out millions of emails only to scan them back in again.

I'm glad to know the real reason!

10 hours agofredley

I just want to add that I would expect the exact same thing from the German government. Glad to see we're not all that different

2 hours agorireads

CLRF vs LF strikes again. Partly at least.

I wonder why even have a max line length limit in the first place? I.e. is this for a technical reason or just display related?

11 hours agoquibono

Wait, now we have to deal with Carriage Line Return Feeds too?

I wonder if the person who had the idea of virtualizing the typewriter carriage knew how much trouble they would cause over time.

8 hours agobrk

Yeah, and using two bytes for a single line termination (or separation or whatever)? Why make things more complicated and take more space at the same time?

7 hours agokeybored

Remember that back in the mists of time, computers used typewriter-esque machines for user interaction and text output. You had to send a CR followed by an LF to go to the next line on the physical device. Storing both characters in the file meant the OS didn't need to insert any additional characters when printing. Having two separate characters let you do tricks like overstriking (just send CR, no LF)

7 hours agofloren

True, but I don’t think there was a common reason to ever send a linefeed without going back to the beginning. Were people printing lots of vertical pipe characters at column 70 or something?

It would’ve been far less messy to make printers process linefeed like \n acts today, and omit the redundant CR. Then you could still use CR for those overstrike purposes but have a 1-byte universal newline character, which we almost finally have today now that Windows mostly stopped resisting the inevitable.

5 hours agokstrauser

As I understand it (this may be apocryphal but I've seen it in multiple places) the print head on simple-minded output devices didn't move fast enough to get all the way back over to the left before it started to output the next character. Making LF a separate character to be issued after CR meant that the line feed would happen while the carriage was returning, and then it's ready to print the next character. This lets you process incoming characters at a consistent rate; otherwise you'd need some way to buffer the characters that arrived while the CR was happening.

Now, if you want to use CR by itself for fancy overstriking etc. you'd need to put something else into the character stream, like a space followed by a backspace, just to kill time.

3 hours agofloren

I don't think that's right. Not saying that to argue, more to discuss this because it's fun to think about.

In any event, wouldn't you have to either buffer or use flow-control to pause receiving while a CR was being processed? You wouldn't want to start printing the next line's characters in reverse while the carriage was going back to the beginning.

My suspicion is there was a committee that was more bent on purity than practicality that day, and they were opposed to the idea of having CR for "go to column 0" and newline for "go to column 0 and also advance the paper", even though it seems extremely unlikely you'd ever want "advance the paper without going to column 0" (which you could still emulate with newline + tab, or newline + 43 spaces, for those exceptional cases).

2 hours agokstrauser

I've seen this explanation multiple times through the years, but as I said it's entirely possible it was just a post-hoc thing somebody came up with. But as you said, it's fun to argue/think about, so here's some more. I'm talking about the ASR-33 because they're the archetypal printing terminal in my mind.

If you look at the schematics for an ASR-33, there's just 2 transistors in the whole thing (https://drive.google.com/file/d/1acB3nhXU1Bb7YhQZcCb5jBA8cer...). Even the serial decoding is done electromechanically (per https://www.pdp8online.com/asr33/asr33.shtml), and the only "flow control" was that if you sent XON, the teletype would start the paper tape reader -- there was no way, as far as I can tell, for the teletype to ask the sender to pause while it processes a CR.

These things ran at 110 baud. If you can't do flow control, your only option if CR takes more than 1/10th of a second is to buffer... but if you can't do flow control, and the computer continues to send you stuff at 110 baud, you can't get that buffer emptied until the computer stops sending, so each subsequent CR will fill your buffer just a little bit more until you're screwed. You need the character following CR (which presumably takes about 2/10ths of a second) to be a non-printing character... so splitting out LF as its own thing gives you that and allows for the occasional case where doing a linefeed without a carriage return is desirable.

Curious Marc (https://www.curiousmarc.com/mechanical/teletype-asr-33) built a current loop adapter for his ASR-33, and you'll note that one of the features is "Pin #32: Send extra NUL character after CR (helps to not loose first char of new line)" -- so I'd guess that on his old and probably worn-out machine, even sending LF after CR doesn't buy enough time and the next character sometimes gets "lost" unless you send a filler NUL.

Now, I haven't really used serial communications in anger for over a decade, and I've never used a printing terminal, so somebody with actual experience is welcome to come in and tell me I'm wrong.

an hour agofloren

That's fascinating! They got a lot of mileage out of those 2 transistors, didn't they?

But see, that's why I think there has to be more to it. That extra LF character wouldn't be enough to satisfy the timing requirements, so you'd also need to send NUL to appropriately pad the delay time. And come to think of it, the delay time would be proportional to the column the carriage was on when you sent the CR, wouldn't it? I guess it's possible that it always went to the end but that seems unlikely, not least because if that were true then you'd never need to send CR at all, just send NUL or space until you calculated it was at EOL.

31 minutes agokstrauser

> now that Windows mostly stopped resisting the inevitable

I've been trying to get Visual Studio to stop mucking with line endings and encodings for years. I've searched and set all the relevant settings I could find, including using a .editorconfig file, but it refuses to be consistent. Someone please tell me I'm wrong and there's a way to force LF and UTF-8 no-BOM for all files all the time. I can't believe how much time I waste on this, mainly so diffs are clean.

3 hours agosaila

I haven't seen them other than in the submission - but if the length matches up it may be that they were processed from raw email, the RFC defines a length to wrap at.

Edit: yes I think that's most likely what it is (and it's SHOULD 78ch; MUST 998ch) - I was forgetting that it also specifies the CRLF usage, it's not (necessarily) related to Windows at all here as described in TFA.

Here it is in my 'notmuch-more' email lib: https://github.com/OJFord/amail/blob/8904c91de6dfb5cba2b279f...

10 hours agoOJFord

> it's not (necessarily) related to Windows at all here as described in TFA.

The article doesn't claim that it's Windows related. The article is very clear in explaining that the spec requires =CRLF (3 characters), then mentions (in passing) that CRLF is the typical line ending on Windows, then speculates that someone replaced the two characters CRLF with a one character new line, as on Unix or other OSs.

10 hours agoFabHK

Ok yeah, I may have misinterpreted that bit in the article. It would be a totally reasonable assumption if you didn't happen to know that about email, though; it wasn't a judgement regardless.

10 hours agoOJFord

I am just wondering how it is a good idea for a server to insert characters into the user's input. If a colleague were to propose this, I'd laugh in his face.

It's just so hacky, I can't believe it's a real-life solution.

10 hours agodgan

“Insert characters”?

Consider converting the original text (maintaining the author’s original line wrapping and indentation) to base64. Has anything been “inserted” into the text? I would suggest not. It has been encoded.

Now consider an encoding that leaves most of the text readable, translates some things based on a line length limit, and some other things based on transport limitations (e.g. passing through 7-bit systems.) As long as one follows the correct decoding rules, the original will remain intact - nothing “inserted.” The problem is someone just knowledgeable enough to be aware that email is human readable but not aware of the proper decoding has attempted to “clean up” the email for sharing.
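As a minimal sketch (using Python's stdlib `quopri` module; the sample bytes are made up, not from the actual dumps), the round trip really is lossless:

```python
import quopri

# Quoted-printable as it crosses the wire: =20 is a space,
# =C3=A9 is UTF-8 'é', and a line ending in "=" is a soft break.
wire = b"Hello=20there,=20caf=C3=A9 owners! This line is soft-wrap=\r\nped at the limit."

decoded = quopri.decodestring(wire)
print(decoded.decode("utf-8"))
# Hello there, café owners! This line is soft-wrapped at the limit.
```

Decode with the spec's rules and the author's original text comes back exactly; skip that step and the =20s leak into the "plain text".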

10 hours agojagged-chisel

Okay, it does sound better from this POV. Still weird, as it's a client/UI concern, not something a server is supposed to do; what's next, adding "bold" tags to the title? Lol

10 hours agodgan

SMTP is a line-oriented protocol. The server processes one line at a time, and needs to understand headers.

Infinite line length = infinite buffer. Even worse, QP is 7-bit (because SMTP started out ASCII-only), so every byte >127 gets encoded as three bytes (an equals sign, then two hex digits), so a 500-byte run of non-ASCII UTF-8 balloons to 1500 bytes on the wire.

It all made sense at the time. Not so much these days when 7-bit pipes only exist because they always have.
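The inflation is easy to see with the stdlib `quopri` encoder (my toy string, just to show the ratio):

```python
import quopri

text = "é" * 10                      # 10 characters, 20 UTF-8 bytes
raw = text.encode("utf-8")
encoded = quopri.encodestring(raw)   # every byte >127 becomes "=XX"

print(len(raw), len(encoded))        # 3x inflation for the non-ASCII bytes
print(encoded[:12])                  # b'=C3=A9=C3=A9'
```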

8 hours agobrookst

When you post a comment on HN, the server inserts HTML tags into your input. Isn't that essentially the same thing?

10 hours agoflexagoon

No, because there is a clear separation between the content and the envelope. You wouldn't expect the post office to open your physical letters and write routing instructions for the postmen inside them.

But I agree with the sibling comment: it makes more sense when it's called "encoding" instead of "inserting chars into the original stream".

10 hours agodgan

> You wouldnt expect the post office to open your physical letters and write routing instructions to the postmen for delivery

Digital communication is based on the postmen reading, transcribing and copying your letters. There is a reason why digital communication is treated differently than letters by the law, and why the legally mandated secrecy for letters doesn't apply to emails.

7 hours ago1718627440

It's called escaping, and almost every protocol has it. HN must convert the & symbol to &amp; for displaying in HTML. Many wire protocols like SATA or Ethernet must insert a 1 after a certain number of consecutive 0s to maintain electrical balance. Don't remember which ones — don't quote me that it's SATA and Ethernet.
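The HN case in miniature, with Python's stdlib (my example, not HN's actual code): escaping is reversible, so nothing is really "inserted" into the content.

```python
import html

comment = "Tom & Jerry agree that x < y"
escaped = html.escape(comment)
print(escaped)                             # Tom &amp; Jerry agree that x &lt; y
assert html.unescape(escaped) == comment   # round trip restores the original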

9 hours agodirewolf20

Protocols that literally insert a bit are HDLC / PPP / CAN and they insert a 0 after a few 1s

5 hours agozoho_seni

Just wait until you learn what mess UTF-8 will turn your characters into. ;)

9 hours agolayer8

What's funny is that the failure mode here is so quietly destructive.

4 hours agoErigmolCt

My main takeaway from this article is that I want to know what happened to the modified pigs with non-cloven hoofs.

9 hours agovoxelghost

    cat title | sed 's/anyway/in email/'
would save a click for those already familiar with =20 etc.

9 hours agolucb1e

Great. Can't wait for equal signs to be the next (((whatever this is))). Maybe it's a secret code. j/k

On a side note: There are actually products marketed as kosher bacon (it's usually beef or turkey). And secular Jews frequently make jokes like this about our kosher bros who aren't allowed to eat the real stuff for some dumb reason like it has too many toes.

10 hours agonoduerme

Great. Can't wait for equal signs to be the next (((whatever this is))). Maybe it's a secret code. j/k

Yeah clearly you guys are the biggest victims in all this... get in there and make it about you!

2 hours agoLAC-Tech

It’s a fascinating case of 'Abstraction Leak'.

We’ve become so accustomed to modern libraries handling encoding transparently that when raw data surfaces (like in these dumps), we often lack the 'Digital Archeology' skills to recognize basic Quoted-Printable.

These artifacts (=20, =3D) are effectively fossils of the transport layer. It’s a stark reminder that underneath our modern AI/React/JSON world, the internet is still largely held together by 7-bit ASCII constraints and protocols from the 1980s.

8 hours agoMarginalGainz

TLDR "=\r\n" was converted to "=\n"
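A guess at the transformation (my reconstruction, not the actual evidence pipeline): a blind CRLF-to-LF find-and-replace leaves the soft-break "=" dangling at end of line, and without a real quoted-printable decode the =20s survive too.

```python
import quopri

wire = b"This sentence is soft-wrap=\r\nped, with a trailing space.=20\r\n"

# The suspected find-and-replace: swap line endings, never decode.
mangled = wire.replace(b"\r\n", b"\n")
print(mangled)
# b'This sentence is soft-wrap=\nped, with a trailing space.=20\n'

# Proper decoding instead: soft break and =20 both disappear.
clean = quopri.decodestring(wire)
print(clean)
# b'This sentence is soft-wrapped, with a trailing space. \r\n'
```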

11 hours agoseydor

Author seems to think Unix uses a character called "NL" instead of "LF"...

11 hours agonetsharc

Unicode labels U+000A as all of "LINE FEED (LF)", "new line (NL)" and "end of line (EOL)". I'm guessing different names were imported from slightly different character sets, although I understand the all-uppercase name to be the main/official one.

https://www.unicode.org/charts/PDF/U0000.pdf

10 hours agodebugnik

Oh okay... for a technical article, referring to 0A by two different names within the same sentence is not confusing at all... /S

Geezus...

3 hours agonetsharc

NL, or New Line, is a character in some character sets, like old mainframe computers. No need to be snarky just because he mistyped or uses a different name for something.

10 hours agomatsemann

I am more surprised by the description of “rock döts”. A Norwegian certainly knows that ASCII is not enough for all our alphabetical needs.

10 hours agodb_admin

https://en.wikipedia.org/wiki/Metal_umlaut

The writer presumably knows that umlauts and other non-ascii characters are functional in many languages. "rock döts" is poking fun at the trend in a certain tranche of anglophone rock/metal to use them in a purely aesthetic way in band names etc.

8 hours agotopaz0

No, the article is quite explicit that that isn't what happened.

10 hours agothaumasiotes

[dead]

8 hours agoVoodooJuJu

[flagged]

7 hours agoValveFan6969

Could be worsened by inaccurate optical character recognition in some cases.

Back in those days optical scanners were still used.

10 hours agobrador

People posting Excel formulae?

9 hours agozabzonk

Rock dots? You mean diacritics? Yeah someone invented them: the ancient Greeks, idiöt.

10 hours agoccppurcell

It's not the character, it's the way/context in which it's used.

https://en.wikipedia.org/wiki/Metal_umlaut

10 hours agoRHSeeger

I know what he was referring to. But the use case is obviously languages other than English, not the Motörhead fan club newsletter.

9 hours agoccppurcell

Some combination of people misunderstood some other people's joke, not totally clear which and which.

8 hours agotopaz0

Yeah, that dude oughta read books and learn about computers, too.

10 hours agochr

And live in a country where they use these in their alphabets.