Artificial Intelligence (AI) from a philosophy-of-science perspective


Bibliographic description:

Perdomo Salcedo, Ramon Alfonso. Artificial Intelligence (AI) from a philosophy-of-science perspective // Техника. Технологии. Инженерия. 2018. No. 4 (10). pp. 4-10. URL: https://moluch.ru/th/8/archive/102/3491/ (accessed 26.04.2024).



The Internet of Things (IoT) and personal analytics (PA), with their multitude of sensors and devices, generate a plethora of disparate data at high velocity, defining a complex, heterogeneous landscape such as a human interconnected system. These high-dimensional data are supplemented by unstructured data originating from social media activity. With mobile devices proving valuable in daily life and cloud computing delivering greater flexibility and performance in networking and data management, we are ideally positioned to marry soft computing methodologies to traditional deterministic and interpretive approaches, such as artificial intelligence (AI) applications in medicine, whose purpose is to model and interpret the human brain.

I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along.

Bertrand Russell [1]

In his book Philosophy of the Internet, Ropolyi states [2]: knowledge is a determinative factor of the culture of the modern age. A sharp antagonism towards the beliefs of the Middle Ages and scholastic thinking, a commitment to developing a new worldview based on modern science, and the rational construction of the skeptical, experimenting man are inseparably part of the self-understanding of modernity. [2] It is an interesting and important fact that during the development and existence of crisis situations we can observe essential changes in the dominant information technologies of the age. Printing appeared in Europe in the period of the crisis of faith, and during the unfolding of the crisis of knowledge electronic information technology appears and becomes widely used, including the most characteristic technology of the age, the Internet, as a worldwide information network. It can be shown clearly that printing played an indispensable role in the unfolding of the Reformation of the church, and we can rightly assume that the existence and peculiar usage of the Internet as a source of information will be indispensable to the process of the reformation of knowledge. The Internet derives its disruptive quality from a very special property: it is public, and public in several ways. The standard specifications that define the Internet are themselves open and public: free to read, download, copy, and make derivatives from. As a result, the analysis of the nature of the information sciences and of their social and cultural role will be an important component of the reformation of knowledge.

The intention throughout the following pages is to address the analysis of AI within computational and information-theoretic research in philosophy. As an engineer, I am tempted to ask: are computations either necessary or sufficient for cognition? And I am tempted to approach the subject of artificial intelligence (AI) from a philosophy-of-science perspective, trying to understand whether the process of consciousness can be summarized as the analysis of stored and available (remembered) information, conditioned by the parameters of logic under uncertainty and weighted by the personal value supplied by our own experiences. By definition, AI is the part of the interdisciplinary information sciences area that develops and implements methods and systems that manifest cognitive behavior. The main features of AI are learning, adaptation, generalization, inductive and deductive reasoning, and human-like communication. Further features are currently being developed: consciousness, self-assembly, self-reproduction, emotional (affective) computing, social networks, and decision making, among others.

Introduction

According to preliminary results by Gartner, Inc., shipments of laptops and personal computers surpassed 262.5 million units in 2017 alone. [3] Computer and Internet penetration varies significantly from region to region in the world. In North America, 95 percent of the population has access to the Internet, compared with just 35.2 percent in Africa. Asia, as a continent, has the largest overall number of Internet users (over one billion), making up around 54.4 percent of the world total. [4] It is therefore important to ask: what are the effects of computers on society? Computers have changed the way information is saved and accessed; [5] computers have changed warfare, which is now conducted with computers in many ways; and computers have changed the way people relate to each other and to their environment.

How are this computer revolution and this appetite for capturing data and parameterizing information changing our way of thinking? Let me start by explaining how modern cognitive processes can be modeled. In 1960, Miller, Galanter and Pribram published Plans and the Structure of Behavior, [6] the founding manifesto of cognitive psychology and information processing (IP). They developed the mind-computer analogy, which includes mentalist concepts such as «plans», «goals», «structures» and «strategies». The analogy also allowed admitting that the brain is, first of all, a device capable of dealing with information, not something that only serves to respond to certain types of stimuli. Recognizing this possibility opened the way for psychologists to investigate internal representations without having to resort to neurological reference frames. Modern cognitive models can be characterized as physical, mathematical, and empirical. In fact, recent developments in computational intelligence, in the area of machine learning in particular, have greatly expanded the capabilities of empirical modeling. The discipline that encompasses these new approaches is called data-driven modeling (DDM) and is based on analyzing the data within a system. One of the focal points of DDM is to discover connections between the system state variables (input and output) without explicit knowledge of the physical behavior of the system, pushing empirical modeling beyond its traditional boundaries.
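To make the DDM idea concrete, here is a minimal sketch in Python; the "system" and its measurements are synthetic placeholders, not data from any real process. An empirical model is fitted directly to observed input-output pairs, with no knowledge of the physics that generated them.

```python
# A minimal sketch of data-driven modeling (DDM): the relation between a
# system's input and output is estimated purely from observed data, with no
# physical model of the system. The "system" below is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Observations of an unknown system: inputs x and noisy outputs y.
x = np.linspace(0.0, 10.0, 200)
y = 3.0 * np.sin(x) + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Data-driven step: fit a flexible empirical model (here a degree-7 polynomial)
# directly to the observations, ignoring the underlying physics entirely.
coeffs = np.polyfit(x, y, deg=7)
model = np.poly1d(coeffs)

# The fitted model can now estimate the system's behaviour at unseen inputs.
x_new = np.array([2.5, 7.3])
print("predicted outputs:", model(x_new))
```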

Some neuroscientists, such as Rodolfo Llinas, [20] state that the biological intelligence underlying motricity requires, for its successful execution, a predictive imperative that approximates the consequences of the impending motion. Llinas [20] addresses how such a predictive function may originate from the dynamic properties of neuronal networks. According to him, prediction is the primordial function of the brain; [20] thus the capacity to predict is most likely the ultimate brain function. One could even say that the self is the centralization of prediction.

Could it be said that this kind of prediction is similar to that used by predictive models in engineering? Predictive models in engineering use known results to develop (or train, or estimate) a model that can then be used to predict values for new data. Descriptive models, by contrast, describe patterns in existing data that may also be found in new data; with descriptive models there is no target variable whose value one is striving to predict. Most of the big payoff has been in predictive modeling, when the models are operationalized in a real-world setting.
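As a rough illustration of this distinction, the following sketch (assuming scikit-learn is available; all data are synthetic placeholders) trains a predictive model against a known target and, separately, a descriptive model that only summarizes structure already present in the data.

```python
# Predictive vs. descriptive modeling, illustrated on invented data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(300, 2))            # known measurements

# Predictive model: a target variable exists; we train on known results
# and then predict the target for data we have not seen before.
y = 0.7 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=2.0, size=300)
predictive = LinearRegression().fit(X[:250], y[:250])      # "train/estimate"
print("predicted values:", predictive.predict(X[250:255]))

# Descriptive model: no target variable; we only summarise the structure
# (here, groupings) found in the existing data.
descriptive = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("discovered group labels:", descriptive.labels_[:10])
```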

In their book Aristotle's Laptop: The Discovery of our Informational Mind, Aleksander and Morton [7] state that virtual objects can exist as states of neural networks, and that such objects can have just as vivid a character as any virtual creature in a virtual world created by an artist/programmer. However, neural networks and the cells of a living brain existed long before the advent of the programmed creature; some of these virtual objects are simply called the thoughts of the living organism. On this point, Dennett [8] offers an account of how consciousness arises from the interaction of physical and cognitive processes in the brain. The mind is a virtual object which emerges from a neural network. But what are virtual objects? What are they made of? The inevitable, but not immediately comprehensible, answer is that a virtual machine is informational. According to Aleksander and Morton's informational mind hypothesis, [7] conscious minds are state structures that are created through iconic learning. Distributed representations of colors, edges, objects, etc. are linked with proprioceptive and motor information to generate the awareness of an out-there world. The uniqueness and indivisibility of these iconically learned states reflect the uniqueness and indivisibility of the world.

What is cognition? Contemporary orthodoxy maintains that it is computation: the mind is a special kind of computer, and cognitive processes are the rule-governed manipulation of internal symbolic representations. This broad idea has dominated the philosophy and the rhetoric of cognitive science, and even, to a large extent, its practice, ever since the field emerged from the postwar cybernetic melee. It has provided the general framework for much of the most well-developed and insightful research into the nature of mental operation.

In 2007 Gray declared [9]: ‘The world of science has changed, and there is no question about this. The new model is for the data to be captured by instruments or generated by simulations before being processed by software and for the resulting information or knowledge to be stored in computers. Scientists only get to look at their data fairly late in this pipeline. The techniques and technologies for such data-intensive science are so different that it is worth distinguishing data-intensive science from computational science as a new, fourth paradigm for scientific exploration.’ Data exploration thus arises as a new paradigm of science.

Science Paradigms

  1. A thousand years ago: science was empirical, describing natural phenomena.
  2. In the last few hundred years: a theoretical branch, using models and generalizations.
  3. In the last few decades: a computational branch, simulating complex phenomena.
  4. Today: data exploration (eScience) unifies theory, experiment, and simulation:

‒ Data captured by instruments or generated by simulations;

‒ Processed by software;

‒ Information/knowledge stored in computers;

‒ The scientist analyzes databases and files using data management and statistics (a toy version of this pipeline is sketched below).
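The following toy pipeline, built only from the Python standard library with invented "sensor" values, mirrors the four steps listed above: data are generated, processed, stored in a database, and only then analyzed with statistics.

```python
# A toy version of the fourth-paradigm pipeline: simulated data are processed,
# stored, and analysed late in the pipeline. All names and values are illustrative.
import sqlite3
import statistics
import random

# 1. Data generated by a simulator (here, random "sensor" readings).
readings = [(i, random.gauss(20.0, 2.0)) for i in range(1000)]

# 2./3. Processed by software and stored in a database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (step INTEGER, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)", readings)

# 4. The scientist analyses the stored data only at the end of the pipeline.
values = [row[0] for row in db.execute("SELECT value FROM readings")]
print("mean:", statistics.mean(values), "stdev:", statistics.stdev(values))
```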

People collect information at a cognitive and emotional level to make their life decisions; however, this process is not always carried out in a functional and complete way. It is not possible to think about the concepts of decision, freedom and choice without a context that shapes them and molds them to the moments of people's experience. At the present time, human existence is crossed and addressed by multiple conceptions of scientific knowledge (psychological, philosophical, anthropological and sociological, among others), all of which seek to give answers about the meaning that people give to their lives, as well as to describe the way to find an answer to existence and the meaning of life.

Some useful definitions

So far we have approached the analysis of thought and compared it with artificial intelligence, but it is necessary to give some definitions that will allow us to continue the comparison.

Artificial intelligence (AI), which is the overarching contemplation of how human intelligence can be incorporated into computers.

Computational intelligence (CI), which embraces the family of neural networks, fuzzy systems, and evolutionary computing in addition to other fields within AI and machine learning.

Soft computing (SC), which is close to CI, but with special emphasis on fuzzy rule-based systems posited from data.

Machine learning (ML), which originated as a subcomponent of AI, concentrates on the theoretical foundations used by CI and SC.

Data mining (DM) and knowledge discovery in databases (KDD) are aimed often at very large databases. DM is seen as a part of a wider KDD. Methods used are mainly from statistics and ML.

Human-Data Interaction

In a very deep sense, we humans are information. [10] The philosophy of information [11] is the new philosophical field concerned with (a) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilization and sciences; and (b) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. The philosophy of information is a mature discipline for three reasons. [10] First, it represents an autonomous field of research. Second, it provides an innovative approach to both traditional and new philosophical topics. Third, it can stand beside other branches of philosophy, offering a systematic treatment of the conceptual foundations of the world of information and the information society.

The story of evolution, from the origin of a matter-free universe to the information-processing brain/mind, can be told in three major emergences:

  1. the first appearance of matter, some of it organized into information structures,
  2. the first appearance of life, information structures that create and transmit information by natural selection, variation, and heredity,
  3. the appearance of human minds, which create, store, and transmit information external to their bodies.

The philosophy of information [11] views the mind as the immaterial information in the brain. The brain is seen as a biological information processor. Mind is software in the brain's hardware, although it is altogether different from the logic gates, bit storage, algorithms, computations, and input/output systems of the type of digital computer used as a «computational model of mind» by today's cognitive scientists. Big data is growing as an area of information technology, service, and science, and so too is the need for its intellectual understanding and interpretation from a theoretical, philosophical, and societal perspective. The philosophy of big data [12] is the branch of the philosophy of information concerned with the foundations, methods, and implications of big data: the definitions, meaning, conceptualization, knowledge possibilities, truth standards, and practices in situations involving very large data sets that are big in volume, velocity, variety, veracity, and variability. Artificial neural networks (ANN), fuzzy logic (FL), and genetic algorithms (GA) are artificial intelligence (AI) techniques, inspired by human intelligence, that are currently practiced in many aspects of our daily life. Data mining methodologies that underpin data-driven models are ubiquitous in many common activities, such as using a search engine, selecting a movie online, using an application to find a faster route home, or simply checking the weather forecast.

The new human modes of interacting with data are those of exception, variability, probability, patterns, and prediction, which are not necessarily natural modes for humans. There are new kinds of information available for the first time such as very-deep micro-detail, longitudinal baseline measures, normal deviation patterns, contingency adjustments, anomaly, and emergence. In this context, humans can conceive of the relation to data as one of reality multiplicity given the different attunements of data analysis paradigms, for example those structured around time, frequency, episode, and cycle. There are new kinds of epistemic models that at minimum supplement and extend the traditional scientific method, such as deep learning, hierarchical representation, neural networks, and information visualization.

From early deductionism to deep learning machines

Aristotle (384–322 BC) was a pupil of Plato and teacher of Alexander the Great. He is credited with the earliest study of formal logic and introduced the theory of deductive reasoning. Aristotle also introduced an epistemology based on the study of particular phenomena, which leads to the accumulation of knowledge (rules, formulas) across the sciences: physics, astronomy, chemistry, and so on. According to Aristotle, this knowledge was not supposed to change (it becomes dogma). Aristotle's sharp logic underpins contemporary science. The Aristotelian school of thought makes observations from a bivalent perspective, such as black and white, yes and no, 0 and 1. The nineteenth-century mathematician Georg Cantor instituted the development of set theory on the basis of this bivalent logic and thus rendered it amenable to modern science. Named after the nineteenth-century mathematician George Boole, Boolean logic (BL) [13] is a form of algebra in which all values are reduced to either TRUE or FALSE. Boolean logic is especially important for computer science because it fits nicely with the binary numbering system, in which each bit has a value of either 1 or 0; another way of looking at it is that each bit has a value of either TRUE or FALSE. Probability theory subsequently made this bivalent logic plausible and workable, and Cantor's theory defines sets as collections of definite and distinguishable objects. However, such logic systems (LS) and rules are too rigid to represent the uncertainty in natural phenomena; they are difficult to articulate and not adaptive to change.
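A minimal illustration of this bivalent logic in code, where every proposition, and every bit, reduces to exactly one of two values:

```python
# Boolean logic in its most literal form: propositions take one of two values,
# and the connectives map directly onto bit operations.
raining = True
cold = False

print(raining and cold)      # conjunction -> False
print(raining or cold)       # disjunction -> True
print(not raining)           # negation    -> False

# The same operations expressed on single bits, as a computer stores them.
a, b = 1, 0
print(a & b, a | b, a ^ 1)   # 0 1 0
```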

Human thought, logic, and decision-making processes are not steeped in Boolean purity. We tend to use vague and imprecise words to explain our thoughts or communicate with one another. There is an apparent conflict between the imprecise and vague process of human reasoning, thinking, and decision making and the crisp, scientific reasoning of Boolean computer logic. As computers have increasingly been used to assist engineers in decision making, this conflict has exposed the inadequacy of traditional AI and of conventional rule-based systems, also known as expert systems.

This is where the term fuzzy logic (FL) appears. FL represents information uncertainty and tolerance in a linguistic form:

‒ Fuzzy rules, containing fuzzy propositions;

‒ Fuzzy inference.

Fuzzy propositions can have truth values between true (1) and false (0). Fuzzy rules can be used to represent human knowledge and reasoning. However, fuzzy rules need to be articulated in the first instance, and they need to change, adapt, and evolve through learning, to reflect the way human knowledge evolves. Uncertainty as represented by fuzzy set theory is invariably due either to the random nature of events or to the imprecision and ambiguity of the information we analyze to solve the problem. The outcome of an event in a random process is strictly the result of chance. Probability theory is the ideal tool to adopt when the uncertainty is a product of the randomness of events; statistical or random uncertainty can be ascertained by acute observations and measurements.
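A minimal hand-rolled sketch of these ideas follows; no particular fuzzy-logic library is assumed, and the membership functions and the rule are invented purely for illustration.

```python
# Fuzzy propositions take truth values between 0 (false) and 1 (true); a fuzzy
# rule combines membership degrees instead of demanding a crisp yes/no.
def warm(temp_c):
    """Degree to which a temperature is 'warm' (simple triangular membership)."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10.0   # rising edge
    return (35 - temp_c) / 10.0       # falling edge

def humid(rh):
    """Degree to which relative humidity is 'high' (ramp membership)."""
    return min(1.0, max(0.0, (rh - 40) / 40.0))

# Fuzzy rule: IF temperature is warm AND humidity is high THEN discomfort is high.
# 'AND' is taken as the minimum of the two membership degrees (a common choice).
def discomfort(temp_c, rh):
    return min(warm(temp_c), humid(rh))

print(discomfort(28, 70))   # 0.7: the proposition is largely, but not fully, true
print(discomfort(18, 30))   # 0.0: the rule does not fire at all
```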

Alan Turing (1912–1954) posed a question in 1950: can machines think? It was then reformulated as "Can machines play the imitation game?", now known as the Turing test for AI. It is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human, as evaluated by a human. The Turing test has been both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence. The test, though, proved too difficult to pass without machine learning in an adaptive, incremental way.
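Purely as a schematic of the protocol, not a working AI, the following sketch shows the shape of the imitation game: a judge questions two hidden interlocutors and must guess which is the machine. Both respond() functions are placeholders invented for this illustration.

```python
# Schematic of the imitation game (Turing test). Nothing here is intelligent;
# the point is only the structure: hidden players, a questioning judge, a guess.
import random

def machine_respond(prompt):
    return "I would rather not answer that."       # stand-in for a chatbot

def human_respond(prompt):
    return input(f"(hidden human) {prompt} > ")     # a real person answers

def imitation_game(judge_questions):
    responders = [machine_respond, human_respond]
    random.shuffle(responders)                      # the judge cannot see which is which
    players = dict(zip(["A", "B"], responders))
    for q in judge_questions:
        print(f"Judge asks: {q}")
        for label in ("A", "B"):
            print(f"  {label}: {players[label](q)}")
    guess = input("Judge, which player is the machine (A/B)? ").strip().upper()
    machine_label = "A" if players["A"] is machine_respond else "B"
    return guess == machine_label                   # True if the judge was right

if __name__ == "__main__":
    identified = imitation_game(["What does a rose smell like?", "What is 7 times 13?"])
    print("The judge identified the machine." if identified else "The machine passed this round.")
```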

Connectionism and Artificial Neural Networks

Connectionism [14] is a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks (also known as ‘neural networks’ or ‘neural nets’). Neural networks are simplified models of the brain composed of large numbers of units (the analogs of neurons) together with weights that measure the strength of connections between the units. These weights model the effects of the synapses that link one neuron to another. Experiments on models of this kind have demonstrated an ability to learn such skills as face recognition, reading, and the detection of simple grammatical structure. A single neuron is a very sophisticated information machine.
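The following sketch shows the kind of unit such networks are built from: a single artificial neuron whose weights (the analogues of synapses) are adjusted after each training example, here learning the logical OR function as a stand-in for a learnable skill. The perceptron-style update rule is a standard textbook choice, not anything specific to the sources cited above.

```python
# A single artificial neuron: inputs combined through learned weights, with the
# weights nudged after each example (perceptron-style update). Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=2)
bias = 0.0

# Training data: the logical OR function.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 1], dtype=float)

for _ in range(20):                       # a few passes over the data
    for x, t in zip(inputs, targets):
        output = 1.0 if x @ weights + bias > 0 else 0.0
        error = t - output
        weights += 0.1 * error * x        # strengthen/weaken the "synapses"
        bias += 0.1 * error

print([1.0 if x @ weights + bias > 0 else 0.0 for x in inputs])  # -> [0.0, 1.0, 1.0, 1.0]
```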

Philosophers have become interested in connectionism because it promises to provide an alternative to the classical theory of the mind: [14] the widely held view that the mind is something akin to a digital computer processing a symbolic language. Exactly how and to what extent the connectionist paradigm constitutes a challenge to classicism has been a matter of hot debate in recent years.

For much of the twentieth century the dominant paradigm of intelligence seated the mind in the brain; [15] thus, if computers can model the brain then, the theory goes, it ought to be possible to program computers to act intelligently. In the latter part of the twentieth century this insight, that intelligence is grounded in the brain, fuelled an explosion of interest in computational "neural networks": high-fidelity simulations of the brain (cf. "computational neuroscience") or engineering approximations used to control intelligent machines (connectionism). However, the view that intelligence is rooted solely in the brain is a relatively modern one, and one that in recent years has been challenged by embodied approaches to artificial intelligence, a perspective that can in turn be traced back to the classical era.

In the philosophy of mind, a "theory of mind" typically attempts to explain the nature of ideas, concepts and other mental content in terms of the "cognitive states" of underlying "cognitive processes". A cognitive state can thus encompass knowledge, understanding, beliefs, etc. In a "representational theory of mind" the cognitive states are conceived in terms of relations to "mental representations" (which have content); in this view the underlying cognitive processes are simply understood in terms of "mental operations" on the mental representations. In Locke's "associationist theory of mind", this association of ideas (or associationism, as it later became known) suggested that the mind is organized, at least in part, by principles of association, and that items that "go together" in experience will "go together" in thought; subsequently David Hume refined Locke's generic notion of "going together by association" by reducing it to three core empirical principles: resemblance, contiguity in time and place, and cause and/or effect.

Artificial neural networks (ANN) are computational models that mimic the nervous system in its main functions of adaptive learning and generalization. As noted above, information philosophy views the mind as the immaterial information in the brain, and the brain as a biological information processor, altogether different from the digital computer used as a «computational model of mind» by today's cognitive scientists. The notion that phenomenal states, i.e., conscious states, may depend on subtle properties of neural networks came to the fore with a vengeance over the last decade or so.

Mind as Immaterial Information in a Biological Information Processor

In ancient philosophy, mind/soul versus body was one of the classic dualisms, such as idealism versus materialism, the problem of the one (monism) or the many (pluralism), the distinction between essence and existence, between universals and particulars, between necessity and contingency, between eternal and ephemeral, but most important, the difference between the intelligible world of the noumena and the sensible world of mere appearances or phenomena.

Mind-body as a dualism coincides with Plato’s “Ideas” or “Forms” as pure form, with an ontology different from that of matter. The immaterial Forms, seen by the intellect (nous), allow us to understand the world. On the other hand, mind-body as a monism can picture both sides of the mind-body distinction as pure physicalism, since information embodied in matter corresponds simply to a reorganization of the matter. This was Aristotle’s more practical view. For him, Plato’s Ideas were mere abstractions generalized from many existent particulars. Form without matter is empty, matter without form is inconceivable, unimaginable. Kant rewrote this pre-Socratic observation somewhat obscurely as “Thoughts without content are empty, intuitions without concepts are blind.”

In a strictly comparative sense, without dealing with arguments of value, ethics or other qualifiers, we can find similarities and differences between the human brain and computational processing. The comparison below lists some of them.

Similarities [16]:

‒ Both use electrical signals to send messages.
‒ Both transmit information.
‒ Both have a memory that can grow.
‒ Both can adapt and learn.
‒ Both have evolved over time.
‒ Both need energy.
‒ Both can be damaged.
‒ Both can change and be modified.
‒ Both can do math and other logical tasks.
‒ Both brains and computers are studied by scientists.

Differences [17]:

‒ Synapses are far more complex than electrical logic gates.
‒ The brain is a massively parallel machine; computers are modular and serial.
‒ Short-term memory is not like Random Access Memory (RAM).
‒ The brain is a self-organizing system.
‒ Brains have bodies.
‒ The brain uses content-addressable memory.
‒ No hardware/software distinction can be made with respect to the brain or mind.
‒ Unlike in computers, processing and memory are performed by the same components in the brain.
‒ Processing speed is not fixed in the brain; there is no system clock.
‒ Brains are analogue; computers are digital.
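One row of the comparison, content-addressable memory versus address-based (RAM-like) lookup, can be illustrated with a small sketch. The fuzzy string matching used as the "cue" here is a deliberately crude stand-in for how associative recall is usually modelled; the stored items are invented.

```python
# Address-based lookup (RAM-like) versus content-addressable recall, where a
# partial or noisy cue retrieves the closest stored item.
import difflib

# RAM-like: you must know the exact address (index) to retrieve anything.
ram = ["apple pie recipe", "grandmother's phone number", "route to work"]
print(ram[1])                                   # exact address required

# Content-addressable: a fragment of the content itself is enough as a cue.
def recall(cue, memory):
    match = difflib.get_close_matches(cue, memory, n=1, cutoff=0.0)
    return match[0] if match else None

print(recall("grandma phone", ram))             # -> "grandmother's phone number"
```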

The barrier of consciousness

Many people hold the view that there is a crucial barrier between computer models of minds and real minds, "the barrier of consciousness", and thus that computational connectionist simulations of mind (e.g. the huge, high-fidelity simulation of the brain being instantiated in Henry Markram's Human Brain Project) and "phenomenal (conscious) experiences" are conceptually distinct [18]. But is consciousness a prerequisite for genuine cognition and the realisation of mental states? Certainly Searle believes so: the study of the mind is the study of consciousness, in much the same sense that biology is the study of life, [19] and this observation leads him to postulate a "connection principle" whereby any mental state must be, at least in principle, capable of being brought to conscious awareness. Hence, if computational machines are not capable of enjoying consciousness, they are incapable of carrying genuine mental states, and computational connectionist projects must ultimately fail as an adequate model for cognition.

How do the a priori arguments discussed herein accommodate the important results that computational neuroscience is contributing to cognition? There are two responses to this question. The first suggests that there may be principled reasons why it is not possible to adequately simulate all aspects of neuronal processing with a computational system; there are bounds to a (Turing machine based) computational neuroscience. A second response emerges from the Chinese room and the Dancing with Pixies reductio. It acknowledges the huge value that the computational metaphor has in current psychology and neuroscience, and concedes that a future computational neuroscience may be able to simulate any aspect of neuronal processing and offer insight into all the workings of the brain. However, although such a computational neuroscience would yield a deep understanding of cognitive processes, this response insists on a fundamental ontological division between the simulation of a thing and the thing itself.

References:

  1. Russell, B. The Philosophy of Logical Atomism (1972). Fontana.
  2. Ropolyi, L. Philosophy of the Internet: A Discourse on the Nature of the Internet (2013). E-learning scientific content development in ELTE TTK.
  3. Gartner. Worldwide PC Shipments (cited January 11, 2018). Retrieved from https://www.gartner.com/newsroom/id/3844572
  4. Internet World Stats. Internet Usage Statistics, The Internet Big Picture, World Internet Users and 2018 Population Stats (cited March 27, 2018). Retrieved from https://www.internetworldstats.com/stats.htm
  5. What are the effects of computers on society? Retrieved from https://www.enotes.com/homework-help/effects-computers-society-103945
  6. Miller, G.; Galanter, E.; Pribram, K. Plans and the Structure of Behavior (1960). University of Florida Libraries.
  7. Aleksander, I.; Morton, H. Aristotle's Laptop: The Discovery of our Informational Mind (2012). World Scientific.
  8. Dennett, D. Consciousness Explained (1991). Back Bay Books, Little, Brown and Company.
  9. Microsoft Research. The Fourth Paradigm: Data-Intensive Scientific Discovery (2009). Microsoft.
  10. The Information Philosopher: solving philosophical problems with the new information philosophy. Retrieved from http://www.informationphilosopher.com/mind/
  11. Floridi, L. The Philosophy of Information (2011). Oxford, UK: Oxford University Press.
  12. Swan, M. Philosophy of Big Data: Expanding the Human-Data Relation with Big Data Science Services (2015). Contemporary Philosophy, Kingston University London.
  13. Beal, V. Boolean Logic. Retrieved from https://www.webopedia.com/TERM/B/Boolean_logic.html
  14. Stanford Encyclopedia of Philosophy. Connectionism (cited May 18, 1997). Retrieved from https://plato.stanford.edu/entries/connectionism/
  15. Bishop, J. History and Philosophy of Neural Networks (2015). Goldsmiths, University of London.
  16. Chudler, E. Neuroscience for Kids, supported by a Science Education Partnership Award (R25 RR12312) from the National Center for Research Resources (NCRR). Retrieved from https://faculty.washington.edu/chudler/bvc.html
  17. Chatham, C. 10 Important Differences Between Brains and Computers (cited March 27, 2007). Retrieved from http://scienceblogs.com/developingintelligence/2007/03/27/why-the-brain-is-not-like-a-co/
  18. Torrance, S. Thin Phenomenality and Machine Consciousness, in R. Chrisley, R. Clowes and S. Torrance (eds.) (2005), Proc. 2005 Symposium on Next Generation Approaches to Machine Consciousness: Imagination, Development, Intersubjectivity and Embodiment, AISB05 Convention. Hertfordshire: University of Hertfordshire.
  19. Searle, J. The Rediscovery of the Mind (1992). MIT Press, Cambridge, MA.
  20. Llinas, R. I of the Vortex: From Neurons to Self (2001). MIT Press, Cambridge, MA.