Sunday, July 1, 2018

On Preventing Myopia

This article is reposted from an original piece on qiangyou.org. The original sits on a members-only forum (http://qiangyou.org/bbs/forum.php?mod=viewthread&tid=9033) that only registered users can read, so I am reposting it here to make it easier to share and cite.

Step One in Raising the Next Generation of Marksmen: On Preventing Myopia

http://qiangyou.org/bbs/forum.php?mod=viewthread&tid=9033

This post was last edited by turbopascal on 2013-12-16 14:49.

An important prerequisite for becoming a sharpshooter is good eyesight. (Fine, Xu Haifeng and Wang Yifu were both nearsighted, but they might have won even more gold medals had they not been; that is beside the point here.) Most of our generation is already permanently nearsighted, but if you want to raise the next generation into sharpshooters, the first step is keeping them free of myopia and astigmatism, and for many forum members there is still plenty of time for that. Having tied this rambling to shooting, and considering that I hand out ammunition to everyone, I hope the moderators will keep this post around.

This article was written by turbopascal; feel free to repost. The text is original, but the theory comes from The Myopia Myth: The Truth About Nearsightedness and How to Prevent It [1].

Topic: how to prevent myopia

1. How does myopia develop?

The most remarkable thing about the human body is its plasticity. You can train it for strength like the weightlifter Zhan Xugang, or for agility and speed like Zhang Jike and Lin Dan; even bones can be conditioned as hard as iron, like those of the Bajiquan master Li Shuwen. The result of this plasticity is that whatever the body does most often becomes easier to do. The eye is no exception.

First, let us define the eye's far point and near point. The far point is the farthest distance the eye can see clearly when it is fully relaxed. The near point is the closest distance it can see clearly when it is fully strained. A normal eye's far point is at infinity, so everything can be seen clearly unless the object is so small that its image on the retina is smaller than a single photoreceptor cell.

When a person looks at close objects for long stretches (students reading, doing homework, using computers, and so on), the smooth muscle around the lens of the eye must stay tense for a long time; eventually it spasms and can no longer relax back to its original state. This is pseudomyopia, and it recovers with rest. If the person must keep doing close work for long periods, the eye's plasticity kicks in: the eyeball grows longer, so that the image can still land on the retina without the muscle having to squeeze the lens into such a bulged shape. Once the eyeball has elongated, the myopia is permanent. If a person always reads at 40 cm, his far point will eventually become 40 cm; ask him to read a blackboard 5 meters away and of course he cannot see it clearly.

2. Is myopia related to race or genetics?

Myopia certainly has a genetic component, because it does not occur in everyone. Some people read an enormous amount yet never become nearsighted, which shows their genes differ from other people's. Across races, however, there is no evidence that one race (say, Asians) is more prone to myopia than another. Overall, the more time spent looking at close objects, the higher the rate of myopia.

But genetics is beside the point of this article. Plenty of people smoke heavily for decades, and a few of them have such superb genes that they never get lung cancer or the other diseases smoking causes. So what? If nobody smoked, nobody would get those diseases.

3. What do you do once you are nearsighted?

In a nearsighted eye the eyeball is too long, so the lens focuses the image in front of the retina; you cannot read the blackboard in class or the road signs while driving. The usual fix is glasses with concave lenses. The concave lens reduces the effective convexity of the eye's lens so that the image lands exactly on the retina, and things become clear. In other words, myopic glasses reset the far point back to infinity. But then the eye finds it has to strain again for close work, and its only option is to grow the eyeball even longer. The end result is that wearing myopic glasses for close work makes the eye even more nearsighted. Contact lenses have the same effect. Astigmatism works similarly, except the eyeball does not elongate equally in all directions, for example more along the vertical axis than the horizontal. Myopic glasses lock the myopia and astigmatism permanently into the shape of the eyeball and then deepen them further. So by all means wear glasses once you are nearsighted, but never wear them for close work.

The rigid contact lenses worn during sleep (Ortho-K) flatten the cornea slightly, as if a myopic lens were built onto the cornea itself. Putting things in the eye easily causes other side effects. My son used these lenses and had several minor scratches on the eyeball, repeated dry eyes, and all sorts of unnamed eye drops. A friend's child who wears them recently had eye pain and inflammation, which is quite dangerous.

Laser surgery changes the cornea permanently. It must never be used on children whose vision is still changing. Even for adults I suggest weighing it very carefully, because some people suffer serious side effects [2].

4. Can myopia be prevented?

From the way myopia develops, we can see that as long as the eye is not kept focused on close objects for long periods, myopia will not occur. In modern society, however, avoiding prolonged close work is impossible. The solution, then, is to use glasses to move the eye's far point to the reading distance, so that the eye can read and study while staying relaxed.

First, a word on lens power ("degrees"). Physically, the power is the reciprocal of the lens's focal length (100 degrees in this convention equals one diopter). A myopic lens is concave and its power is negative; since we mostly discuss myopic lenses here, the minus sign is often omitted. Divide 100 meters by the eye's far-point distance to get the degrees (100 m ÷ 50 cm = 200 degrees); likewise, divide 100 meters by the myopia degrees to get the far-point distance. You can verify this yourself: take off your glasses and look at a book, adjust its distance until the edges of the characters just begin to blur, then use the formula above to compute your current degrees; it should come out very close to your glasses' prescription.
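
To make the conversion concrete, here is a minimal C sketch of the rule above; the function names are my own, and the code simply encodes the 100 m ÷ distance formula:

    #include <stdio.h>

    /* Far-point distance in meters -> myopia "degrees" (sign dropped, as in the text). */
    static double degrees_from_far_point(double far_point_m) {
        return 100.0 / far_point_m;
    }

    /* Myopia "degrees" -> far-point distance in meters. */
    static double far_point_from_degrees(double degrees) {
        return 100.0 / degrees;
    }

    int main(void) {
        /* The example above: a 50 cm far point corresponds to 200 degrees. */
        printf("far point 0.50 m -> %.0f degrees\n", degrees_from_far_point(0.50));
        printf("200 degrees -> far point %.2f m\n", far_point_from_degrees(200.0));
        return 0;
    }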

Lens powers add. A 100-degree myopic lens stacked with a 50-degree myopic lens acts as a 150-degree myopic lens. A 100-degree farsighted lens (a convex lens, also called a reading or presbyopia lens, with positive power) stacked with a 250-degree myopic lens (whose power carries a minus sign) acts as a 150-degree myopic lens.

A normal eye's far point is at infinity; put on a pair of 300-degree reading glasses and the far point moves to 33 cm. In other words, the eye has effectively become 300 degrees nearsighted. If the wearer reads at 33 cm in those 300-degree reading glasses, his eyes stay in the most relaxed state the whole time and will never become nearsighted.

A child who is already nearsighted can read at 33 cm wearing lenses 300 degrees weaker than his prescription, and his eyes will likewise stay fully relaxed, so the myopia will never deepen. For example: 300-degree myopia reads with no glasses, 400-degree myopia wears 100-degree myopic lenses, and 200-degree myopia wears 100-degree reading glasses.
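
Here is a small sketch of that prescription arithmetic (my own illustration, not from the book): the reading-lens power is the current prescription, negative for myopia, plus 100 divided by the reading distance in meters.

    #include <stdio.h>

    /* Current prescription in "degrees" (negative for myopia) plus the fog
       needed for the given reading distance: 100 / distance-in-meters. */
    static double reading_lens_degrees(double current_degrees, double distance_m) {
        return current_degrees + 100.0 / distance_m;
    }

    int main(void) {
        double d = 1.0 / 3.0;  /* reading at about 33 cm, i.e. +300 degrees of fog */
        printf("%+.0f\n", reading_lens_degrees(   0.0, d)); /* normal eye        -> +300 reading lens */
        printf("%+.0f\n", reading_lens_degrees(-300.0, d)); /* 300-degree myopia -> 0, no glasses     */
        printf("%+.0f\n", reading_lens_degrees(-400.0, d)); /* 400-degree myopia -> -100 myopic lens  */
        printf("%+.0f\n", reading_lens_degrees(-200.0, d)); /* 200-degree myopia -> +100 reading lens */
        return 0;
    }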

A person who always reads at 33 cm and never wears glasses will end up with at most 300 degrees of myopia; this is why high myopia did not exist in ancient times, before myopic glasses were invented.

So is preventing myopia really necessary? Can't I just put glasses on my child? Myopia is in fact quite harmful. For every 300 degrees of myopia, the eyeball elongates by roughly one millimeter. The eyeball can stretch, but the retina is far less elastic. The deeper the prescription, the weaker the retina, and the more easily it ends in retinal detachment!

In childhood the eyeball elongates easily, which is why children become nearsighted easily. After growing up, the change slows markedly and most people's myopia stops deepening. That is why preventing myopia in childhood is so important.

5. Concrete steps for preventing myopia in children

The precondition is that the child keeps a correct reading and studying posture (sitting upright, eyes not too close to the book); otherwise nothing below will be enough on its own.

First, the lighting should be bright. The brighter the light, the smaller the pupil, which works like squinting and makes the image on the retina sharper.

Second, measure how far the child's eyes are from the book while reading; that distance determines the prescription. If the child reads at 33 cm, he should wear glasses of his current prescription plus 300 degrees. Do not forget that a myopic prescription is negative: a 500-degree nearsighted eye, for example, should use a 200-degree myopic lens as its reading glasses. If the child reads at 40 cm, use the current prescription plus 250 degrees.

Then, do not put any astigmatism correction into the reading glasses. If there is 100 degrees of astigmatism, count it as 50 degrees of myopia instead. This is so the astigmatism does not get locked into the shape of the eyeball.

In addition, the reading glasses should use a shorter pupillary distance. Reading glasses are used for close work, when the pupillary distance is 2-3 mm shorter than when looking straight ahead into the distance.

Also, a school-age child should get bifocals, with the upper half being the normal myopic prescription and the lower half the reading prescription. In class he looks up at the blackboard through the normal lens and looks down to write through the reading portion.

Finally, if the child often uses a computer, do not rely on the bifocals, because the eyes look up at a computer screen; make a separate pair of ordinary reading glasses with the prescription described above.

I made myself reading glasses 200 degrees weaker than my prescription and wear them at work. The first few days felt strange, but now I like them very much; my eyes no longer feel tired. My son got bifocals a month ago, with the reading portion 250 degrees weaker. Because his myopia already exceeds 300 degrees, reading without glasses is enough for him, so he does not need a separate pair of reading glasses.

An important point when wearing reading glasses is to adjust the distance so that the eyes can only just make out the characters (or so the characters are even slightly blurred). This forces the eye to stay in its most relaxed state.

6. Why doesn't my eye doctor tell me this?

Academia has had plenty of research on this for decades, and some papers have demonstrated the effectiveness of bifocals. But fundamentally changing such a huge industry takes more supporting data, the participation of more institutions, and many other factors. The Myopia Myth [1] lists a number of actions by authoritative ophthalmology organizations to keep the public from learning the truth; read it if you are interested. In recent years, though, more and more eye doctors have come to know that bifocals slow the progression of myopia. Even on a professional ophthalmology site that mocked and harshly criticized the book's author [3], all I saw were complaints about the photos on his website and about factors the book does not explain; and even in that discussion, some eye doctors agreed that bifocals slow the rate at which myopia deepens.

Searching for "雾视疗法" (fogging therapy) also turns up Chinese-language introductions to this approach to preventing myopia.

References

[1] The Myopia Myth: The Truth About Nearsightedness and How to Prevent It.  http://preventmyopia.org/thebook.html
[2] LASIK Complications, Risks.  http://www.lasikcomplications.com
[3] Evil eye doctors and myopia. http://goo.gl/vNWxnd

Thursday, August 30, 2012

Limiting SSD Performance to Prolong Its Life?


FAST 2012 has a paper, "Lifetime Management of Flash-Based SSDs Using Recovery-Aware Dynamic Throttling", by researchers in Seoul.  It tries to meet an SSD's lifetime guarantee by limiting its write performance so that the drive does not wear out before a predefined number of years.
An SSD has a limited lifetime, but SSDs in enterprise storage systems are expected to guarantee a certain lifetime.  Some vendors cap an SSD's maximum throughput so that its life can meet the requirement.  This paper proposes several dynamic throttling algorithms that avoid capping peak throughput when the workload has idle periods.  Such algorithms remind me of network switches that rate-limit some streams for fairness to other streams, and of VBR video encoders that prefetch big frames while the current bit rate is low to reduce the peak bandwidth requirement.
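As a rough sketch of the idea (my own illustration, not the paper's algorithm): static throttling caps writes at the average rate that makes the remaining endurance last until the warranty target, while a dynamic scheme banks budget during idle periods so later bursts need not be capped at that average.
    /* Illustrative only: names and structure are hypothetical. */
    typedef struct {
        double endurance_bytes_left;   /* bytes the flash can still absorb */
        double warranty_seconds_left;  /* time left until the lifetime target */
        double budget_bytes;           /* write budget accumulated so far */
    } ssd_throttle;

    /* Called periodically: idle or lightly loaded intervals grow the budget. */
    void accrue_budget(ssd_throttle *t, double elapsed_seconds) {
        double avg_rate = t->endurance_bytes_left / t->warranty_seconds_left;
        t->budget_bytes += avg_rate * elapsed_seconds;
    }

    /* Admit a write only if enough budget has accumulated; otherwise the
       caller must delay (throttle) the write. */
    int admit_write(ssd_throttle *t, double bytes) {
        if (t->budget_bytes < bytes)
            return 0;                  /* throttle */
        t->budget_bytes -= bytes;
        return 1;                      /* proceed */
    }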
Maybe I missed something, but I think prolonging the lifetime of an SSD by limiting its performance is a questionable direction for solving this problem.  The point of using an SSD is its high performance; limiting that performance seems to defeat the purpose.  Writing to an SSD is normal wear and tear and should not be limited, just as car manufacturers should not cap a car's speed to guarantee that it lasts a certain number of years.  If the application needs to write one gigabyte of data, no matter how slowly it goes, it still has to write that amount.  Who benefits if the storage system delays the writes?  You waste the application's running time and power just so the SSD keeps working until it is out of the warranty period?  I think a more meaningful problem in this area would be how to automatically migrate data off an about-to-wear-out SSD and notify the admin to replace it.

Friday, April 2, 2010

Use a sandbox to prevent runaway cars?

I read yet another news story today about runaway Toyota cars.  All these sad stories about runaway cars make me think seriously about software bugs.
As a software engineer, my daily job is to create bugs (as a by-product of writing code) and to fix (hopefully a large percentage of) them.  When my code is shipped to millions of customers, it inevitably and sadly still has bugs.  The cost could be customers losing time restarting their computers, or losing data.
However, for the software used in cars, the cost could be people's lives!  Finding and fixing all bugs in complex software is extremely hard, if not impossible.  I hope one day we will be able to prove the correctness of any software (see the seL4 kernel).  But until that day comes, we still have to deal with software bugs.
There are news reports saying that cars recalled and repaired by Toyota can still run away.  This could imply that Toyota didn't really fix the problem.  Here are the possible causes I can think of:
  1. Driver error.  If the driver pushes the gas pedal to the floor, the car will run away.  This is the most common cause of runaway cars, but it was not the case in several of the Toyota accidents.
  2. Hardware problem.  I find it hard to believe a floor mat could jam the gas pedal: the mat grips the floor and is unlikely to be pushed forward, and it is hardly rigid enough to hold the gas pedal all the way down.  Is it possible that the gas pedal sticks by itself?  Since I never heard stories about sticky gas pedals before software was widely used in car controls, that does not sound very likely either.
  3. Software problem.  I think this is the most likely cause.
I don't know what the control software in cars looks like, but it should be easy to add a check before sending the throttle command: if the brake is pressed, cancel the gas pedal command.  However, if there is a memory corruption bug and the code doing this check is corrupted, the check is useless.
So I think a sandbox could be used as a layer of protection.  If the control software runs inside a sandbox, the sandbox can perform this check, reject unreasonable commands issued by the control software, and restart the control software to reset its state.  This way, no matter what bugs occur inside the sandbox, an unreasonable combination of commands can never get out.
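A minimal sketch of what such a sandbox check might look like (the structure and names are hypothetical, not taken from any real automotive system):
    /* The sandbox filters every command leaving the control software. */
    typedef struct {
        int brake_pressed;      /* 1 if the brake pedal is pressed */
        int throttle_percent;   /* requested throttle opening, 0-100 */
    } control_command;

    control_command sandbox_filter(control_command cmd) {
        /* No matter how buggy the control software is, the brake always
           overrides the gas pedal before the command reaches the engine. */
        if (cmd.brake_pressed && cmd.throttle_percent > 0) {
            cmd.throttle_percent = 0;
            /* The sandbox could also restart the control software here
               to reset its (possibly corrupted) state. */
        }
        return cmd;
    }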

Thursday, March 11, 2010

Are you smarter than a compiler?

I recently read a survey of C compilers by Felix von Leitner, presented at Linux Kongress 2009, and was surprised by how smart modern C compilers are.
For example, to calculate the absolute value of a 64-bit signed integer, you might write:
    x > 0 ? x : -x
The compiler optimizes this into branchless code:
    long tmp = x >> 63;
    return (tmp ^ x) - tmp;
Since a mispredicted branch costs about 10 cycles and the CPU can issue up to 4 instructions per cycle, the branchless variant is a lot faster, though much less readable to a human.
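For anyone who wants to try this (for example, to compare the assembly from gcc -O2), here is a self-contained sketch of both forms; the function names are mine.
    #include <stdint.h>

    /* Straightforward version with a branch. */
    int64_t abs_branch(int64_t x) {
        return x > 0 ? x : -x;
    }

    /* The branchless form the compiler generates: tmp is all ones when x is
       negative and all zeros otherwise, so (tmp ^ x) - tmp negates x exactly
       when x is negative. */
    int64_t abs_branchless(int64_t x) {
        int64_t tmp = x >> 63;   /* arithmetic shift on mainstream compilers */
        return (tmp ^ x) - tmp;
    }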
I still remember the days when I wrote "++i" instead of "i++" and "x >> 2" instead of "x / 4".  But such "optimizations" no longer matter, thanks to advances in compiler technology.
In most cases, it is better for us to spend time making the code more readable than getting the last bit of performance from the hardware.
Remember, readable code may be as fast as "optimized" code, and more importantly, readable code is more likely to be correct!

Sunday, March 7, 2010

FAST 2010 Impressions: "Technology for Developing Regions"

The annual FAST conference is an excellent conference on file and storage system technologies.  I attend it every year and really like it.  I want to share with you the first keynote of FAST 2010, "Technology for Developing Regions" by Professor Eric Brewer from UC Berkeley.

Professor Brewer and his team went to Africa and India and researched technologies to support local education and health care and to preserve culture.  Here are some interesting points from the talk.

The cell phone market is much bigger than the PC market, and Africa is the fastest-growing region for cell phones, although coverage there is only about 10%.  Many women in Africa buy cell phones and rent out the minutes to local people, since there are no land lines; this is a very profitable business for them.  Interestingly, much of the money that pays for cell phone use is money sent into Africa by Africans working abroad.

There are 6000 languages in Africa, but sadly most are dying, because there is no storage or technology to record them.  Local radio stations do not record their aired programs either, because they do not have enough storage.  Brewer's team provided storage technology for them, as well as recorded education materials for local schools.  They ship DVDs there and use SMS to apply small updates; shipping DVDs is still the cheapest way to transfer large amounts of data with good bandwidth.  They built TierStore, a mostly disconnected distributed file system, as the storage technology for these applications.

Brewer's team was also helping rural India build infrastructure for telemedicine.  They used WiFi as the long-range communication method, the cheapest technology they could find.  As long as one point has line of sight to another, a WiFi link with several Mbps of throughput can be established.  They set a world-record WiFi transmission distance: 382 km!  One end of that link was on a mountain, because the earth is round and line of sight does not reach far along its curved surface.  These WiFi links let the hospitals reach more patients: doctors interview patients over the network.  This has worked quite well, and over 25,000 patients have recovered their sight through the eye hospitals.

Had I not heard his talk, I would never have imagined that one of the biggest challenges in building such an infrastructure is the power supply, something 99.999% reliable and taken for granted in developed countries!  Brewer's team found that the line voltage could swing from far below 220 V to spikes of 500-1000 V.  As a result, they lost over 50 power adapters and some equipment to these spikes.  I wonder how such high voltage could ever reach their equipment without damaging everything in between?!