Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Aryia-Behroziuan/Philosophy-and-ethics
Last synced: 3 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/Aryia-Behroziuan/Philosophy-and-ethics
- Owner: Aryia-Behroziuan
- Created: 2020-10-28T17:46:05.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2020-10-28T17:46:07.000Z (about 4 years ago)
- Last Synced: 2023-10-20T17:43:40.388Z (about 1 year ago)
- Size: 1000 Bytes
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
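The metadata fields above are also exposed through the page's "JSON representation" link. As a rough illustration, the sketch below shows how such a record might look as JSON and how it could be parsed in Python; the field names are inferred from the list above and are assumptions, not the documented ecosyste.ms schema.

```python
# Minimal sketch: parsing a project record like the one listed above.
# Field names and values mirror the metadata shown on this page; they are
# illustrative and not the official ecosyste.ms JSON schema.
import json
from datetime import datetime, timezone

record_json = """
{
  "host": "GitHub",
  "url": "https://github.com/Aryia-Behroziuan/Philosophy-and-ethics",
  "owner": "Aryia-Behroziuan",
  "created_at": "2020-10-28T17:46:05.000Z",
  "default_branch": "main",
  "last_pushed_at": "2020-10-28T17:46:07.000Z",
  "last_synced_at": "2023-10-20T17:43:40.388Z",
  "size_bytes": 1000,
  "stars": 1,
  "watchers": 2,
  "forks": 0,
  "open_issues": 0,
  "readme": "README.md"
}
"""

record = json.loads(record_json)

# Work out how long ago the repository was last pushed to.
last_pushed = datetime.fromisoformat(record["last_pushed_at"].replace("Z", "+00:00"))
age_days = (datetime.now(timezone.utc) - last_pushed).days

print(f'{record["owner"]}: {record["url"]}')
print(f'Stars: {record["stars"]}, watchers: {record["watchers"]}, forks: {record["forks"]}')
print(f'Last pushed {age_days} days ago on branch "{record["default_branch"]}"')
```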
Awesome Lists containing this project
- awesome_ai_agents - Philosophy-And-Ethics - This section should include only a brief summary of another article. See Wikipedia:Summary style for information on how to properly incor… (Building / Ethics)
README
# Philosophy-and-ethics
*Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence*

There are three philosophical questions related to AI:

- Whether artificial general intelligence is possible; whether a machine can solve any problem that a human being can solve using intelligence, or whether there are hard limits to what a machine can accomplish.
- Whether intelligent machines are dangerous; how humans can ensure that machines behave ethically and that they are used ethically.
- Whether a machine can have a mind, consciousness, and mental states in the same sense that human beings do; whether a machine can be sentient, and thus deserve certain rights, and whether a machine can intentionally cause harm.

## The limits of artificial general intelligence

*Main articles: Philosophy of artificial intelligence, Turing test, Physical symbol systems hypothesis, Dreyfus' critique of artificial intelligence, The Emperor's New Mind, and AI effect*

**Alan Turing's "polite convention":** One need not decide whether a machine can "think"; one need only decide whether a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.[203]

**The Dartmouth proposal:** "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956.[204]

**Newell and Simon's physical symbol system hypothesis:** "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols.[205]

Hubert Dreyfus argues that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge (see Dreyfus' critique of AI).[207][208]