Troublesome Times
Spans of time that cause trouble with computers or are significant to computers.
Issues related to very large spans of time will be collected elsewhere.
Also see Troublesome Dates.
0.04 seconds
40 Milli Second
0.2 seconds
200 Milli Second
As currently implemented, there is a 200 millisecond ceiling on the time for which output is corked by TCP_CORK
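On Linux the option can be sketched like this; a minimal, Linux-only example (no data is actually sent, and the 200 ms flush happens in the kernel, not in this code):

```python
import socket

# Linux-only sketch: TCP_CORK holds back partial frames so small writes
# are coalesced. Even if the application never uncorks, the kernel gives
# up and flushes after the ~200 ms ceiling described above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)   # cork: batch writes
# ... several small s.send() calls would be coalesced here ...
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # uncork: flush now
s.close()
```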
5 minutes
5 Minute
Linux jiffies wrap 5 minutes after boot time
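The 5-minute figure comes from the kernel deliberately initializing the 32-bit jiffies counter just short of wraparound (INITIAL_JIFFIES is -300*HZ truncated to 32 bits), so wrap bugs surface minutes after boot instead of ~497 days later. A sketch, assuming HZ=1000:

```python
HZ = 1000  # assumed CONFIG_HZ value; 100, 250, and 1000 are all common
initial_jiffies = (-300 * HZ) & 0xFFFFFFFF   # INITIAL_JIFFIES, truncated to 32 bits
ticks_until_wrap = 2**32 - initial_jiffies
print(ticks_until_wrap / HZ / 60)  # → 5.0 (minutes until the counter wraps)
```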
~11 minutes
9 * 75 Second == 675 Second == 11.25 Minute
tcp_keepalive_probes - INTEGER
How many keepalive probes TCP sends out, until it decides that the connection is broken. Default value: 9.
tcp_keepalive_intvl - INTEGER
How frequently the probes are sent out. Multiplied by tcp_keepalive_probes it is time to kill not responding connection, after probes started. Default value: 75sec i.e. connection will be aborted after ~11 minutes of retries.
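The arithmetic behind the heading, using the two defaults quoted above:

```python
tcp_keepalive_probes = 9    # default probe count
tcp_keepalive_intvl = 75    # default interval, in seconds
timeout = tcp_keepalive_probes * tcp_keepalive_intvl
print(timeout, timeout / 60)  # → 675 11.25  (seconds, minutes)
```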
~17 minutes
1 Mega Milli Second == 16.66 Minute
~53 minutes
1 Micro Century == 52.6 Minute
2 hours
tcp_keepalive_time - INTEGER
How often TCP sends out keepalive messages when keepalive is enabled. Default: 2hours.
~4.5 hours
2^31 / (6 * 22050 Hz) == 4.5 Hour
~11 hours
~4 days
100 Hour == 4.17 Day
~4-5 days
Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days.
~6.2 days
2^29 Milli Second == 149.131 Hour == 6.2 Day
~11.5 days
1 million seconds
1 billion milli-seconds
~19.4 days
2^24 deci seconds
A 24-bit register, with clock ticks every 0.1 second, would overflow in less than 20 days
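Checking that figure:

```python
ticks = 2**24          # capacity of a 24-bit register
tick_seconds = 0.1     # one decisecond per tick
print(ticks * tick_seconds / 86_400)  # ≈ 19.42 days until overflow
```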
22 days
this article says 21 days: https://itwire.com/business-it-news/security/boeing-787-needs-a-reboot-every-21-days.html
unclear what the cause is
~24.9 days
2^31 milliseconds is about 24.9 days
probably means Integer.MAX_VALUE here, since Long.MAX_VALUE is not prime
40 days
Now, for a different real world example, Microsoft IIS (3.x, I think it was) had a bug where the date/time fields in W3C format log files would stop incrementing after only 40 days. Wonder what size counters they were using?
unclear
~49.7 days
2^32 milliseconds
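This entry and the ~24.9-day one above are the signed and unsigned faces of the same counter: milliseconds in 32 bits. (Windows' GetTickCount, for example, wraps at the unsigned bound.)

```python
MS_PER_DAY = 24 * 60 * 60 * 1000
print(2**31 / MS_PER_DAY)  # ≈ 24.86 days: signed 32-bit millisecond counter
print(2**32 / MS_PER_DAY)  # ≈ 49.71 days: unsigned (e.g. GetTickCount-style)
```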
~51 days
42 bit counter @ 1 MHz = 50.9033 Day
2^32 ticks of 1024 µs = 50.9033 Day
This is a great article:
A Reverse Engineer’s Perspective on the Boeing 787 ‘51 days’ Airworthiness Directive
and proposes
0x800000000000 @ 32 MHz = 50.9033 Day
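The proposed counters above all describe the same span; a sketch checking that they agree:

```python
DAY = 86_400
candidates = [
    (2**42, 1e6),              # 42-bit counter @ 1 MHz
    (2**32 * 1024, 1e6),       # 2^32 ticks of 1024 µs (the same span, refactored)
    (0x800000000000, 32e6),    # 2^47 @ 32 MHz, as the article proposes
]
for ticks, hz in candidates:
    print(round(ticks / hz / DAY, 4))  # each → 50.9033
```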
~70 days
1700 Hour
could be: 102400 minutes
2^10 Hecto Minute (2^10 * 100 minutes)
208.5 days
0xffffffffffffffff >> 10 Nano Second == 208.5 Day
In[1]:= BitShiftRight[16^^ffffffffffffffff, 10]/10^9/60/60/24 // N
Out[1]= 208.5
~212 days
5184 hours (suspiciously close to 2^64 picoseconds, but that is actually 5124.1 hours)
really close to 5124.1 + 60 ? But why would you need to add 60 hours?
In[226]:= 2^32*Second/230 Minute/(60 Second) Hour/(60 Minute) // N
Out[226]= 5187.16 Hour
In[200]:= 2^53*Second/(482.5 Mega) Minute/(60 Second) Hour/(60 Minute) // N
Out[200]= 5185.49 Hour
In[204]:= 2^64*Second/(1000000 Mega) Minute/(60 Second) Hour/(60 Minute) // N
Out[204]= 5124.1 Hour
213.5 days
It’s what you get when you have a 64 bit unsigned counter running at 1 THz.
~248 days
2^31 centi-seconds
2^31 ticks @ 100 Hz
~497 days
497 is the approximate number of days a counter will last if it is 32 bits, unsigned, starts from zero, and ticks at 100 Hz.
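The ~248-day and ~497-day entries are the signed and unsigned variants of the same 100 Hz counter:

```python
CS_PER_DAY = 24 * 60 * 60 * 100   # centiseconds (100 Hz ticks) per day
print(2**31 / CS_PER_DAY)  # ≈ 248.55 days: signed 32-bit @ 100 Hz
print(2**32 / CS_PER_DAY)  # ≈ 497.10 days: unsigned 32-bit @ 100 Hz
```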
1.28 years
466 days
0xF0000000 * 10 Milli Second = 465.99 Day
~1.36 years
2^32 * 10 Milli Second == 497 Day
- https://aussiestorageblog.wordpress.com/2011/05/05/497-the-real-number-of-the-it-beast/
- https://www.cisco.com/c/en/us/support/docs/field-notices/631/fn63178.html
~2.3 years
828 Day
2^32 / (60Hz * 60sec * 60min * 24hr) = 828.5 days
~2.7 years
994 Day
~3.7 years
32768 Hour == 2^15 Hour
~4.5 years
2^57 Nano Second == 40032 Hour
HPE releases urgent fix to stop enterprise SSDs conking out at 40K hours - https://news.ycombinator.com/item?id=22706968 - March 2020 (0 comments)
HPE SSD flaw will brick hardware after 40k hours - https://news.ycombinator.com/item?id=22697758 - March 2020 (0 comments)
Some HP Enterprise SSD will brick after 40000 hours without update - https://news.ycombinator.com/item?id=22697001 - March 2020 (1 comment)
HPE Warns of New Firmware Flaw That Bricks SSDs After 40k Hours of Use - https://news.ycombinator.com/item?id=22692611 - March 2020 (0 comments)
HPE Warns of New Bug That Kills SSD Drives After 40k Hours - https://news.ycombinator.com/item?id=22680420 - March 2020 (0 comments)
(there’s also https://news.ycombinator.com/item?id=32035934, but that was submitted today)
current theory:
2^53 @ 62.5 MHz == 40032 hours
2^57 @ 1 GHz == 40032 hours
2^63 @ 64 GHz == 40032 hours
2^64 @ 128 GHz == 40032 hours
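The four theories all name the same span; a sketch verifying they agree (the counter widths and frequencies are guesses from the list above, not confirmed hardware details):

```python
HOUR = 3600
theories = [
    (2**53, 62.5e6),   # 53-bit counter @ 62.5 MHz
    (2**57, 1e9),      # 57-bit counter @ 1 GHz
    (2**63, 64e9),     # 63-bit counter @ 64 GHz
    (2**64, 128e9),    # 64-bit counter @ 128 GHz
]
for ticks, hz in theories:
    print(round(ticks / hz / HOUR))  # each → 40032
```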
~7.4 years
2^16 Hour == 2730.67 Day == 7.48 Year
~13.6 years
429496729.6 Second == 2^32 Deci Second
~15.84 years
500,000,000 seconds (15 years, 308 days, 53 minutes and 20 seconds)
~19.6 years
1024 Week == 10321920 Minute
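This is the GPS week-number rollover: the broadcast week field is 10 bits, so it wraps every 1024 weeks. Checking the heading's figures:

```python
weeks = 2**10                      # capacity of a 10-bit GPS week counter
minutes = weeks * 7 * 24 * 60
print(minutes)                     # → 10321920
print(weeks * 7 / 365.25)          # ≈ 19.62 years between rollovers
```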
~31.7 years
1 billion seconds
~36 years
It sounds like that could equate to “Sat May 13 02:27:28 BST 2006”, or 1147483648 seconds since the epoch, which makes it exactly 1,000,000,000 seconds until the expiry of 32-bit time. Coincidence? That seems too strange, since to a computer 1,000,000,000 is not a nice round number.
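The quoted arithmetic checks out: 2^31 seconds is the signed 32-bit time_t limit (the Y2038 boundary), and one billion seconds before it lands in May 2006 (01:27:28 UTC is 02:27:28 BST). A sketch:

```python
from datetime import datetime, timezone

t = 2**31 - 1_000_000_000          # one billion seconds before Y2038
print(t)                           # → 1147483648
print(datetime.fromtimestamp(t, tz=timezone.utc))  # → 2006-05-13 01:27:28+00:00
```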
~110 years
~157 years
2^13 weeks
~1260.31 years
2^16 weeks