Pipixue, free question search
[Short-answer question]
At the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water: Del Spooner or a child named Sarah. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where an order would conflict with the first law; and 3. Robots must protect their own existence, unless doing so conflicts with the first or second law. These laws are programmed into Asimov's robots; they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.

The robot that rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm. This expansion of the first law allows robots to determine what is in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

Whether it is possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations. Assessing situations can be complicated: a robot has to identify the players, conditions, and possible outcomes for various scenarios.
It is doubtful that a computer program can do all that, at least not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies, called "H-bots," from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what is best for humanity, especially if it cannot calculate survival odds?
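The purely calculated approach the passage describes can be sketched in a few lines. This is an illustrative example, not anything from the film or the Bristol experiment: the function name and the probability inputs are assumptions made up for the sketch, which simply picks whoever has the highest survival probability, exactly the reasoning the movie robot applies to Spooner and Sarah.

```python
# Hypothetical sketch of a robot that chooses a rescue target purely by
# calculated survival odds (names and numbers taken from the passage).
def choose_rescue(candidates):
    """Return the name with the highest survival probability.

    candidates: dict mapping name -> survival probability (0.0 to 1.0).
    Returns None if there is no one to save.
    """
    if not candidates:
        return None
    # max() with key=candidates.get compares entries by their probability.
    return max(candidates, key=candidates.get)

# The scene from I, Robot: Spooner at 45 percent, Sarah at 11 percent.
print(choose_rescue({"Spooner": 0.45, "Sarah": 0.11}))  # prints "Spooner"
```

Note what the sketch leaves out: it has no notion of "save her!", of a child's life weighing differently, or of what to do when two candidates tie, which is precisely the moral gap the passage argues calculation alone cannot fill.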
Related questions
[Single-choice question] The direct basis for preparing an enterprise's direct materials, direct labor, and manufacturing overhead budgets is ( ).
A. The sales budget
B. The cash budget
C. The production budget
D. The expense budget
[Single-choice question] Being practical in life means proceeding from the realities of life, viewing life with a scientific attitude, and creating one's life with a spirit of ____. ( )
A. Diligence
B. Pragmatism
C. Striving
D. Pioneering
[Single-choice question] The basis for preparing an enterprise's direct materials, direct labor, and manufacturing overhead budgets is ( ).
A. The sales budget
B. The cash budget
C. The expense budget
D. The production budget
[Single-choice question] An enterprise's direct materials, direct labor, and manufacturing overhead budgets are determined on the basis of ( ).
A. The sales budget
B. The cost budget
C. The production budget
D. The cash budget
[Multiple-choice question] Being practical in life requires us to proceed from the realities of life and ( ).
A. View life with a scientific attitude
B. View everyday life with a calm, level-headed mindset
C. Savor life with an optimistic mindset
D. Create one's life with a pragmatic spirit
[Multiple-choice question] Non-shipping-issue transactions include: ( )
A. Transactions with neutral or negative feedback
B. Low DSR scores
C. Item-not-as-described disputes
D. Transactions cancelled by the seller
[Single-choice question] A soil has a high cation exchange capacity but low base saturation. In terms of retaining available (ionic) nutrients and its acidity or alkalinity, this soil should be evaluated as ____.
A. Large potential to retain nutrients, but the available ionic nutrients actually retained are not high; the soil is acidic
B. Large potential to retain nutrients, and the available nutrients actually retained are also high; the soil is neutral
C. Small potential to retain nutrients, with few available nutrients actually retained; the soil is acidic
D. Large potential to retain nutrients, with a very high content of exchangeable nutrient ions; the soil is acidic
[Single-choice question] An enterprise's direct materials, direct labor, and manufacturing overhead budgets are determined directly on the basis of ( ).
A. The sales budget
B. The cost budget
C. The cash budget
D. The production budget
[Multiple-choice question] (eBay, 2019) Non-shipping-issue transactions include: ( )
A. Transactions with neutral or negative feedback
B. Low DSR scores
C. Item-not-as-described disputes
D. Transactions cancelled by the seller
[Short-answer question] Being practical in life means proceeding from the realities of life, viewing life with a scientific attitude, and creating one's life with a spirit of ( ).