Understanding Control at the Knowledge Level

B. Chandrasekaran

Laboratory for AI Research, The Ohio State University, Columbus, OH 43210

Email: chandra@cis.ohio-state.edu

Abstract

What is it that unifies the control task in all its manifestations, from the thermostat to the operator of a nuclear power plant? At the same time, how do we explain the variety of the solutions that we see for the task? I propose a Knowledge Level analysis of the task which leads to a task-structure for control. Differences in availability of knowledge, the degree of compilation in the knowledge to map from observations to actions, and properties required of the solutions together determine the differences in the solution architectures. I end by discussing a number of heuristics that arise out of the Knowledge Level analysis that can help in the design of systems to control the physical world.

What is the Knowledge Level?

By now most of us in AI know about the Knowledge Level proposal of Newell [Newell, 1981]. It is a way of explaining and predicting the behavior of a decision-making agent without committing oneself to a description of the mechanisms of implementation. The idea is to attribute to the agent a goal or set of goals and knowledge which together would explain its behavior, assuming that the agent is abstractly a rational agent, i.e., one that would apply an item of relevant knowledge to the achievement of a goal. Imagine the following conversation between a guest and a host at a house party:

G: Why does your cat keep going into the kitchen again and again?

H: Oh, it thinks that the food bowl is still in the kitchen. It doesn’t know I just moved it to the porch.

The host attributes to the cat the goal of satisfying its hunger, and explains its behavior by positing a (mistaken) piece of knowledge that the food is in the kitchen. That the cat would naturally go to the kitchen under these conditions seems reasonable to the host and presumably to the guest. When people talk this way, they are not asserting that the neural stuff of the cat is some kind of a logical inference machine working on Predicate Calculus expressions. It is simply a useful way of setting up a model of agents and using the model for explaining their behavior. The attributions of the goal and knowledge can be changed on the basis of further empirical evidence, but the revised model would still be in the same language of goals and knowledge.

The Knowledge Level still needs a representation, but this is a representation that is not posited in the agent, but one in which outsiders talk about the agent. Newell thought that logic was an appropriate representation for this purpose, leaving open the possibility of other languages also being appropriate in some circumstances. Newell used the phrase “Symbol Level” to refer to the representational languages actually used to implement artificial decision agents (or explain the implementation of natural agents). Logic-based languages, Lisp and FORTRAN, neural net descriptions, and even Brooks’s subsumption architectures are all possible Symbol-Level implementations for a given Knowledge Level specification of an agent.

The Control Problem at the Knowledge Level

Consider the following devices and control agents: the thermostat, the speed regulator for an engine, an animal controlling its body during some motion, the operator of a nuclear power plant, the president and his economic advisors during the task of policy formulation to control inflation and unemployment, and a major corporation planning to control its rapid loss of market share. All these systems are engaged in a “control” task, but seem to use rather different control techniques and architectures. Is this similarity in high-level description just a consequence of an informal use of words in our natural language, or is it an indication of some important structural similarity that can have useful technical consequences? Formulating the control problem at the Knowledge Level can help us to see what makes these problems similar, and, at the same time, to explain the widely divergent implementations of the systems that I listed. Fig. 1 is a brief description of the control problem at the Knowledge Level.

Control Agent C,
System to be controlled S, state vector s, goal state G, defined as a wff of predicates over components of s,
Observations O,
Action repertoire A,

The Task: Synthesize action sequence from A such that S reaches G, subject to various performance constraints (time, error, cost, robustness, stability, etc.)

In fact, control in this sense includes a good part of the general problem of intelligence.

Fig. 1. The control task at the Knowledge Level

In order to see how different control tasks differ in their nature, thus permitting different types of solutions, we need to posit a task structure [Chandrasekaran, et al, 1992] for it. The task structure is a decomposition of the task into a set of subtasks, and is one plausible way to accomplish the goals specified in the task description. Fig. 2 describes a task structure for control.

Basically, the task involves two important subtasks: using O, model S, and generate a control response based on the model. Both of these tasks could use as their subtask a prediction component. Typically, the modeling task would use prediction to generate consequences of the hypothesized model and check against reality to verify the hypothesis. The planning component would use prediction to check whether the plan would generate the intended behavior given the model.

• Common subtasks:

» Build a model of S using O
(The general version of the problem is abductive explanation: from perception to scientific theory formation are examples of this.) The task might involve prediction as a subtask.

» Create a proposed plan to move S to G
(The general version of the problem is one of synthesis of plans.)

» Predict behavior of S under plan using model
(In general, simulation to analysis may be invoked.)

» Modify plan

Fig. 2. The task-structure of control

Every control system need not do each of the tasks in Fig. 2 explicitly. It is hard to build effective control systems which do not use some sort of observation to sense the environment, or which do not make the generation of the control signal or policy depend on the observation. Many control systems can be interpreted as performing these tasks implicitly. The prediction subtask may actually be skipped altogether in certain domains.
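To make the decomposition concrete, here is a minimal Python sketch of the task structure, with the Fig. 1 ingredients reduced to bare type aliases. The function names and the single-pass control step are illustrative assumptions, not a specification from the paper; any of the subtask slots may be trivial or absent in a particular controller.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

# Illustrative stand-ins for the Knowledge Level description in Fig. 1.
Observation = Any   # an element of O
Action = Any        # an element of the repertoire A
Model = Any         # the controller's current model of S

@dataclass
class ControlTask:
    # Each field is one subtask from Fig. 2; any of them may be implicit
    # or trivial in a given controller (e.g., the thermostat).
    build_model: Callable[[Sequence[Observation]], Model]
    propose_plan: Callable[[Model], Sequence[Action]]
    predict: Callable[[Model, Sequence[Action]], Model]
    goal_reached: Callable[[Model], bool]          # the goal predicate G
    modify_plan: Callable[[Sequence[Action], Model], Sequence[Action]]

    def control_step(self, observations: Sequence[Observation]) -> Sequence[Action]:
        """One pass through the task structure: model S, plan, predict, revise."""
        model = self.build_model(observations)
        plan = self.propose_plan(model)
        predicted = self.predict(model, plan)      # prediction may be skipped entirely
        if not self.goal_reached(predicted):
            plan = self.modify_plan(plan, predicted)
        return plan
```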

The variety of solutions to the control problem arises from the different assumptions and requirements under which different types of solutions are adopted for the subtasks, resulting in different properties for the control solution as a whole.

As I mentioned, at one extreme, the subtasks may be done implicitly, or in a limited way such that they only work under certain conditions. At the other extreme, they may also be done with explicit problem solving by multiple agents searching in complex problem spaces. And of course there are solutions of varying complexity in between.

The Thermostat, the Physician and the Neural Net Controller

Consider a thermostat (C in Fig. 1). For this system, S is the room; s consists of a single state variable, the room temperature; G, the goal state of S, is the desired temperature range; O is the sensing of the temperature by the bimetallic strip; and A consists of actions to turn on and off the furnace, the air-conditioner, the fan, etc.

The modeling subtask is solved by directly measuring the variable implicated in the goal predicate. The model of the environment is simply the value of the single state variable, the temperature, and that in turn is a direct function of the curvature of the bimetallic strip.

The curvature of the strip also directly determines when the furnace will be turned on and off. Thus the control generation subtask is solved in the thermostat by using a direct relation between the model value and the action. The two subtasks, and the task as a whole, are thus implemented by a direct mapping between the observation and the action.
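A minimal sketch of the thermostat as such a direct observation-to-action mapping; the temperature thresholds and action names are invented for illustration.

```python
def thermostat_step(temperature: float,
                    low: float = 19.0,
                    high: float = 22.0) -> str:
    """Reflex control: the single observed state variable (temperature,
    standing in for the curvature of the bimetallic strip) is mapped
    directly to an action, with no explicit model or prediction step."""
    if temperature < low:
        return "furnace_on"
    if temperature > high:
        return "furnace_off"
    return "no_action"
```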

Because of the extreme simplicity of the way the subtasks are solved, the prediction task, which is normally a subtask of the modeling and planning tasks, is skipped in the thermostat.

The control architecture of the thermostat is economical and analysis of its behavior is tractable. But there is also a price to pay for this simplicity. Suppose the measurement of temperature by the bimetallic strip is off by 5 deg. The control system will systematically malfunction. A similar problem can be imagined for the control generation component. A larger control system consisting of a human problem solver (or an automated diagnostic system) in the loop may be able to diagnose the problem and adjust the control behavior. This approach increases the robustness of the architecture, but at the cost of increased complexity of the modeling subtask.

Now consider the task facing a physician (C): controlling a patient’s body (S). Various symptoms and diagnostic data constitute the set O. The therapeutic options available constitute the set A. The goal state is specified by a set of predicates over important body parameters, such as the temperature, liver function, heart rate, etc.

Consider the model-making subtask. This is the familiar diagnostic task. In some instances this problem can be quite complex, involving abductive explanation building, prediction, and so on. This process is modeled as problem space search. The task of generating therapies is usually not as complex, but could involve plan instantiation and prediction, again tasks that are best modeled as search in problem spaces.

Why can’t the two subtasks, modeling and planning, be handled by the direct mapping techniques that seem to be so successful in the case of the thermostat? To start off, the number of state variables in the model is quite large, and the relation between observations and the model variables is not as direct in this domain. It is practically impossible to so instrument the body that every relevant variable in the model can be observed directly. With respect to planning, the complexity of a control system that maps directly from symptoms to therapies − some sort of a huge table look-up − would be quite large. It is much more advantageous to map the observations to equivalence classes − the diagnostic categories − and then index the planning actions to these equivalence classes. But doing all of this takes the physician far away from the strategies appropriate for the thermostat.
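The following sketch illustrates the indexing strategy just described: observations are first mapped into diagnostic equivalence classes, and therapy plans are indexed by those classes rather than by raw symptom combinations. The categories, findings and therapies are hypothetical placeholders, not anything prescribed by the paper.

```python
def classify(findings: set) -> str:
    """Map raw observations into a diagnostic equivalence class
    (the modeling subtask, here trivially compiled into two rules)."""
    if {"fever", "productive_cough"} <= findings:
        return "bacterial_pneumonia"
    if "fever" in findings:
        return "viral_infection"
    return "unknown"

# The planning subtask is reduced to indexing plans by diagnostic category,
# instead of a direct symptoms-to-therapy table.
THERAPIES = {
    "bacterial_pneumonia": ["antibiotics", "rest", "fluids"],
    "viral_infection": ["rest", "fluids"],
    "unknown": ["order_further_tests"],
}

plan = THERAPIES[classify({"fever", "productive_cough"})]
```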

As a point intermediate in the spectrum between the thermostat, which is a kind of reflex control system, and the physician, who is a deliberative search-based problem solver, consider control systems based on PDP-like (or other types of) neural networks. O provides the inputs to the neural net, and the output of the net should be composed from the elements of the set A. Neural networks can be thought of as systems that select, by using parallel techniques, a path to one of the output nodes that is appropriate for the given input. The activity of even multiply layered NNs can still be modeled as a selection of such paths in parallel in a hierarchy of pre-enumerated and organized search spaces. This kind of connection finding in a limited space of possibilities is why these networks are also often called associative. During a cycle of its activity, the net finds a connection between actual observations and appropriate actions. This behavior needs to be contrasted with a model of deliberation such as Soar [Laird, et al, 1987] in which the essence of deliberation is that traversing a problem space and establishing connections between problem spaces are themselves subject to open-ended additional problem solving at run time.

The three models that we have considered so far − the thermostat, neural net controllers, and deliberative problem search controllers − can be compared along different dimensions as follows.

                 Speed     Robustness           Tractability*
Reflex           fast      low                  easy
NNs              medium    medium               medium
Delib. engines   slow      potentially high**   low

*: tractability of analysis
**: depending on availability of knowledge

Table 1: Tradeoff between different kinds of control systems along different dimensions

By robustness in Table 1 I mean the range of conditions under which the control system would perform correctly. The thermostat is unable to handle the situation where the assumption about the relation between the curvature of the strip and the temperature was incorrect. Given a particular body of knowledge, deliberation can in principle use the deductive closure of the knowledge base in determining the control action, while the other two types of control systems in the Table use only knowledge within a fixed length of connection chaining. Of course, any specific implementation of problem space search may not use the full power of deductive closure, or the deductive closure of the knowledge available may be unable in principle to handle a given new situation.

The control systems in Table 1 are simply three samples in a large set of possibilities, but selected because of their prominence in biological control models. The reflex and the NN models are more commonly used in the discussion of animal behavior and human motor control behavior, while deliberation is generally restricted to human control behavior where problem solving plays a major role. Engineering of control systems does not need to be restricted to these three families. Other choices can be made in the space of possibilities, reflecting different degrees of search and compilation.

Sources of Power

I have been involved, over the last several years, in the construction of AI-based process control systems and also in research on causal understanding of devices. I have also followed the major trends in both control systems theory − much of which is carried on in a mathematical tradition different from the one that is prevalent in AI − and attempts to understand biological control. My own research has been motivated by trying to understand some of the pragmatics of human reasoning in prediction, causal understanding and real-time control. I have catalogued − and I will be discussing in the rest of the paper − a set of heuristics that I characterize as sources of power that biological control systems use. These ideas can also be used in the design of practical control systems, i.e., they are not intended just as explanations of biological control behavior. I do not intend them to be an exhaustive list, but as examples of heuristics that may be obtained by studying the phenomenon of control at an abstract level. These heuristics do not depend on what implementation approaches are used in the actual design − be they symbolic, connectionist networks or fuzzy sets.

Integrating Modules of Different Types

We have identified a spectrum of controllers: at one end are fast-acting controllers, but with very circumscribed ability to link up observations and actions; and at the other, slow deliberative controllers which search in problem spaces in an open-ended way. Biological control makes use of several controllers from different places in the spectrum. How the controllers are organized so as to make the best use of them is expressed as Heuristic 1.

Heuristic 1. Layer the modules such that the faster, less robust modules are at the lower levels, and slower, more robust modules are on top of them, overriding or augmenting the control provided by the lower level ones. This is illustrated in Fig. 3.

[Figure: three layers, from bottom to top: Reflex actions, Distributed neural control, Deliberation; each layer can override the one below it, with observations O coming in and actions A going out.] The three modules above are biologically motivated. Engineering systems do not need to be restricted to these three layers precisely.

Fig. 3. Layering of modules

Many control systems in engineering already follow this heuristic. For example, process engineering systems have a primary control layer that directly activates certain important controls based on the value of some carefully chosen sensors. In nuclear power plants, the primary cooling is activated instantly as soon as certain temperatures exceed preset thresholds. In addition, there are often additional controllers that perform more complex control actions, some of them to augment the control actions of the lower level modules. When humans are in the loop, they may intervene to override the lower level control actions as well. In general, emergency conditions (say in controlling a pressure cooker or driving a car) will be handled by the lower level modules. In the case of driving a car, making hypotheses about the intentions of other drivers or predicting the consequences of a route change would require the involvement of higher-level modules performing more complex problem solving.

In addition to overriding their controls as appropriate, the higher level modules can influence the lower level modules in another way. They can decompose the control task and pass on to the lower modules control goals at a lower level of abstraction that the lower-level modules can achieve more readily. For example, the deliberative controller for robot control may take the control goal of “boiling water” (say, the robot is executing an order to make coffee) and decompose it into control goals of “reaching the stove” and “turning the dials on (or off)”. These control goals can be achieved by motor control programs by using more compiled techniques similar to those in neural and reflex controls.
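A minimal sketch of Heuristic 1, assuming each layer either proposes an action or stays silent, and that a slower layer which does answer overrides the layers beneath it. The defer/override protocol shown is one plausible reading of the layering, not the paper's specification.

```python
from typing import Callable, Optional, Sequence

# A layer maps observations to an action, or None if it has nothing to say.
Layer = Callable[[Sequence[float]], Optional[str]]

def layered_control(observations: Sequence[float],
                    layers_fast_to_slow: Sequence[Layer]) -> Optional[str]:
    """Fast, less robust layers answer first; a slower, more robust layer
    that produces an answer overrides whatever the layers below proposed."""
    action = None
    for layer in layers_fast_to_slow:      # e.g. [reflex, neural_net, deliberation]
        proposal = layer(observations)
        if proposal is not None:
            action = proposal              # higher (later) layers override
    return action
```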

Real-time control

The next set of heuristics is important for the design of real-time control systems and is based on ideas discussed in [Chandrasekaran, et al, 1991].

Control with a guarantee of real-time performance is impossible. Physical systems have, for all practical purposes, an unbounded descriptive complexity. Any set of measurements can only convey limited information about the system to be controlled. This means that the best model that any intelligence can build at any time may be incomplete for the purpose of action generation. No action generation scheme can be guaranteed to achieve the goal within a given time limit, whatever the time limit. On the other hand, there exist control schemes for which the more time there is to achieve the actions, the higher the likelihood that actions can be synthesized to achieve the control goals. All of this leads to the conclusion that in the control of physical systems, the time required to assure that a control action will achieve the goal is unbounded.

The discussion in the previous paragraph leads to two desiderata for any action generation scheme for real-time control. Desiderata:

• 1. For as large a range of goals and situations as possible, actions need to be generated rapidly and with a high likelihood of success. That is, we would like as much of the control as possible to be reactive.

• 2. Some provision needs to be made for what to do when the actions fail to meet the goals in the time available, as will inevitably happen sooner or later.

Desideratum 1 leads to the kind of modules at the lower levels of the layering in Fig. 3. The following Heuristics 2 and 3 say more about how the modules should be designed.

Heuristic 2. Design sensor systems such that the system to be controlled can be modeled as rapidly as possible.

As direct a mapping as possible should be made from sensor values to internal states that are related to important goals (especially threats to important goals). Techniques in the spirit of reflex or associative controls could be useful here. In fact, any technique whose knowledge can be characterized as “compiled,” in the sense described in [Chandrasekaran, 1991], would be appropriate. However, there is a natural limit to how many of the situations can be covered in this way without an undue proliferation of sensors. So only the most common and important situations can be covered this way.

Heuristic 3. Design action primitives such that mapping from models to actions can be made as rapidly as possible.

Action primitives need to be designed such that they have as direct a relation as possible to achieving or maintaining the more important goals. A corresponding limit here is the proliferation of primitive actions.
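Anticipating the driving example discussed next, here is a minimal sketch of Heuristics 2 and 3 together: a compiled mapping from a few carefully chosen sensors to goal-relevant states, and action primitives indexed directly by those states. The sensors, thresholds, states and primitives are all invented for illustration.

```python
from typing import Optional

def detect_state(sensors: dict) -> Optional[str]:
    """Heuristic 2: compiled mapping from sensor readings to a few
    goal-relevant (mostly threatening) internal states."""
    if sensors.get("coolant_temp", 0.0) > 120.0:    # dedicated overheating sensor
        return "engine_overheating"
    if sensors.get("oil_pressure", 100.0) < 10.0:
        return "oil_pressure_lost"
    return None   # everything else must be inferred by slower diagnostic reasoning

# Heuristic 3: action primitives indexed directly by those compiled states.
ACTION_PRIMITIVES = {
    "engine_overheating": "pull_over_and_stop",
    "oil_pressure_lost": "pull_over_and_stop",
}

def reactive_step(sensors: dict) -> Optional[str]:
    state = detect_state(sensors)
    return ACTION_PRIMITIVES.get(state) if state else None
```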

Let us discuss Heuristics 2 and 3 in the context of some examples. In driving a car, the design of sensor and action systems has evolved over time to help infer the most dangerous states of the model or the most commonly occurring states as quickly as possible, and to help take immediate action. If the car has to be pulled over immediately because the engine is getting too hot − and if this is a vital control action − install a sensor that recognizes this state directly. On the other hand, we cannot have a sensor for every internal state of interest. For example, there is no direct sensor for a worn piston ring. That condition has to be inferred through a diagnostic reasoning chain, using symptoms and other observations. Similarly, as soon as some dangerous state is detected, the control action is to stop the car. Applying the brake is the relevant action here, and cars are designed such that this is available as an action primitive. Again, there are limits on the number of action primitives that can be provided. For example, the control action of increasing traction does not have a direct control action associated with it. A plan has to be set in motion involving a number of other control actions.

Desideratum 2 leads to the following heuristic.

Heuristic 4. Real-time control requires a framework for goal abandonment and substitution. This requires as much pre-compilation of goals and their priority relations as possible.

As we drive a car and note that the weather is getting bad, we often decide that the original goal of getting to the destination by a certain time is unlikely to be achieved. Or, in the control of a nuclear power plant, the operator’s attempts to achieve the goal of producing maximum power in the presence of some hardware failure might not be bearing fruit. In these cases, the original goal is abandoned and substituted by a less attractive but more achievable goal. The driver of the car substitutes the goal of getting to the destination an hour later. The power plant operator abandons the goal of power production, and instead pursues the goal of radiation containment.

How does the controller pick the new goal? It could spend its time reasoning about what goals to substitute at the least cost, or it could spend the time trying to achieve the new goal, whatever it might be. In many important real-time control problems replacement goals and their priorities can be pre-compiled. In the nuclear industry, for example, a prioritized goal structure called the safety function hierarchy is made available in advance to the operators. If the operator cannot maintain safe power production and decides to abandon the production goal, the hierarchy gives him the appropriate new goal. We acquire over time, as we interact with the world, a number of such goal priority relations. In our everyday behavior, these relations help us to navigate the physical world in close to real time almost always. We occasionally have to stop and think about which goals to substitute, but not often.
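A minimal sketch of Heuristic 4, with a pre-compiled priority list standing in for something like the safety function hierarchy; the goals listed and the substitution rule are illustrative assumptions.

```python
# A pre-compiled, prioritized list of fallback goals (most preferred first),
# loosely modeled on the "safety function hierarchy" mentioned in the text.
GOAL_HIERARCHY = [
    "maximum_power_production",
    "safe_reduced_power",
    "safe_shutdown",
    "radiation_containment",
]

def substitute_goal(current_goal: str, achievable: set) -> str:
    """When the current goal is judged unachievable in the time available,
    fall back to the highest-priority goal that is still achievable,
    without spending scarce time deliberating over what to substitute."""
    start = GOAL_HIERARCHY.index(current_goal)
    for goal in GOAL_HIERARCHY[start + 1:]:
        if goal in achievable:
            return goal
    return GOAL_HIERARCHY[-1]   # last-resort goal

new_goal = substitute_goal("maximum_power_production",
                           achievable={"radiation_containment"})
```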

Qualitative reasoning in prediction

The last set of heuristics that I will discuss pertains to the problem of prediction. Prediction, as I discussed earlier, is a common subtask in control. Even if a controller is well-equipped with a detailed quantitative model of the environment, the input to the prediction task may be only qualitative.[1] Of course, the model itself may be partly or wholly qualitative as well. de Kleer, Forbus and Kuipers have all proposed elements of a representational vocabulary for qualitative reasoning and associated semantics for the terms in it (see [Forbus, 1988] for a review of the ideas). The heuristics that I discuss below can be viewed as elements of the pragmatics of qualitative reasoning for prediction. Whatever framework for qualitative reasoning one adopts, there will necessarily be ambiguities in the prediction due to lack of complete information. The ambiguities can proliferate exponentially.

[1] I am using the word “qualitative” in the sense of a symbol that stands for a range of actual values, such as “increasing,” “decreasing,” or “large.” It is a form of approximate reasoning. The literature on qualitative physics uses the word in this sense. This sense of “qualitative” should be distinguished from its use to stand for “symbolic” reasoning as opposed to numerical calculation. The latter sense has no connotation of approximation.

How do humans fare in their control of the physical world, in spite of the fact that qualitative reasoning is a veritable fountain of ambiguities? I have outlined some of the ways in which we do this in [Chandrasekaran, 1992]. The following simple example can be used to illustrate the ways in which we manage.

Suppose we want to predict the consequences of throwing a ball at a wall. By using qualitative physical equations (or just commonsense physical knowledge), we can derive a behavior tree with ever-increasing ambiguities. On the other hand, consider how human reasoning might proceed.

1. If nothing much depends on it, we just predict the first couple of bounces and then simply say, “it will go on until it stops.”

2. If there is something valuable on the floor that the bouncing ball might hit, we don’t agonize over whether the ball will hit it or not. We simply pick out this possibility as one that impacts a “Protect valuables” goal, and remove the valuable object (or decide against bouncing the ball).

3. We may bounce the ball a couple of times slowly to get a sense of its elasticity, and use this information to prune some of the ambiguities away. The key idea here is that we use physical interaction as a way of making choices in the tree of future states.

4. We might have bounced the ball before in the same room, and might know from experience that a significant possibility is that it will roll under the bed. The next time the ball is bounced, this possibility can be predicted without going through the complex behavior tree. Further, using another such experience-based compilation, we can identify the ball getting crushed between the bed and the wall. This possibility is generated in two steps of predictive reasoning.

5. Suppose that there is a switch on the wall that controls some device, and that we understand how the device works. Using the idea in 2 above, we note that the ball might hit the switch and turn it on and off. Then because we have a functional understanding of the device that the switch controls, we will be able to make rapid predictions about what would happen to the device. In some cases, we might even be able to make precise predictions by using available quantitative models of the device. The qualitative reasoning identifies the impact on the switch of the device as a possibility, which then makes it possible for us to deploy additional analytic resources on the prediction problem in a highly focused way.

The above list is representative of what I mean by the pragmatics of qualitative reasoning, which are the ways in which we manage to control the physical world well enough, in spite of the qualitativeness inherent in our reasoning. In fact, we exploit qualitativeness to reduce the complexity of prediction (as in point 5 above). The list above leads to the following heuristics.

Heuristic 5. Qualitative reasoning is rarely carried out for more than a very small number of steps.

Heuristic 6. Ambiguities can often be resolved in favor of nodes that correspond to “interesting” possibilities. Typically, interestingness is defined by threats to or supports for various goals of the agent.

Additional reasoning or other forms of verification may be used to check the occurrence of these states. Or actions might simply be taken to avoid or exploit these states.
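A minimal sketch combining Heuristics 5 and 6: the behavior tree is expanded only to a small, fixed depth, and only states judged “interesting” with respect to the agent’s goals are kept. The state representation and the successor and interestingness functions are placeholders supplied by the caller.

```python
from typing import Callable, Iterable, List, Set

def interesting_predictions(initial_state: str,
                            successors: Callable[[str], Iterable[str]],
                            interesting: Callable[[str], bool],
                            max_depth: int = 2) -> Set[str]:
    """Bounded qualitative prediction (Heuristic 5) that keeps only states
    which threaten or support a goal (Heuristic 6), instead of tracking
    every branch of an exponentially ambiguous behavior tree."""
    frontier: List[str] = [initial_state]
    kept: Set[str] = set()
    for _ in range(max_depth):
        next_frontier: List[str] = []
        for state in frontier:
            for nxt in successors(state):      # ambiguity: several possible successors
                if interesting(nxt):
                    kept.add(nxt)              # e.g. "ball_hits_switch"
                next_frontier.append(nxt)
        frontier = next_frontier
    return kept
```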

Heuristic 7. Direct interaction with the physical world can be used to reduce the ambiguities so that further steps in prediction can be made.

Heuristic 7 is consistent with the proposals of the situated action paradigm in AI and cognitive science.

Heuristic 8. The possibility that an action may lead to an important state of interest can be compiled from a previous reasoning experience or stored from a previous interaction with the world. This enables the states to be hypothesized without the agent having to navigate the behavior tree generated from the more detailed physical model.
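A minimal sketch of Heuristic 8 as a data structure: possibilities compiled from earlier reasoning or interaction are stored as a direct lookup from an action to the important states it might lead to, so they can be hypothesized without traversing the behavior tree. The entries echo the ball example and are purely illustrative.

```python
# Experience-compiled associations from an action to important possible
# outcomes, learned once through reasoning or interaction and reused later.
COMPILED_POSSIBILITIES = {
    "bounce_ball_in_bedroom": ["ball_rolls_under_bed",
                               "ball_crushed_between_bed_and_wall"],
}

def hypothesize(action: str) -> list:
    """Retrieve previously compiled outcomes instead of re-deriving them
    from the detailed physical model."""
    return COMPILED_POSSIBILITIES.get(action, [])
```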

Heuristic 9. Prediction often has to jump levels of abstraction in behavior and state representation, since goals of interest occur at many different levels of abstraction.

Compiled causal packages that relate states and behaviors at different levels of abstraction are the focus of the work on Functional Representations, which is a theory of how devices achieve their functionality as a result of the functions of the components and the structure of the device. Research on how to use this kind of representation for focused simulation and prediction is reviewed in [Chandrasekaran, 1994].

Concluding Remarks

The reader will note that, as promised by the use of the term “Knowledge Level” in the title, I have avoided all discussion of specific representational formalisms. I have not gotten involved in the debates on fuzzy versus probabilistic representations, linear control versus nonlinear control, discrete versus continuous control, and so on. All of these issues and technologies, important as they are, still pertain to the Symbol Level of control systems. The Knowledge Level discussion enabled us to get some idea both about what unifies the task of control, and about the reasons for the vast differences in actual control system design strategies. These differences are due to the different constraints on the various subtasks and different types of knowledge that are available. We can see an evolution in intelligence from reflex controls, which fix the connection between observations and actions, through progressively decreasing rigidity of connection between observations and actions, culminating in individual or social deliberative behavior, which provides the most open-ended way of relating observations and actions. I also discussed a number of biologically motivated heuristics for the design of systems for controlling the physical world and illustrated the relevance of these heuristics by looking at some examples.

Acknowledgments

This research was supported partly by a grant from The Ohio State University Office of Research and College of Engineering for interdisciplinary research on intelligent control, and partly by ARPA, contract F30602-93-C-0243, monitored by USAF Rome Laboratories. I thank the participants in the interdisciplinary research group meetings for useful discussions.

References

[Chandrasekaran, 1991] B. Chandrasekaran, "Models vs rules, deep versus compiled, content versus form: Some distinctions in knowledge systems research," IEEE Expert, 6(2), April 1991, 75-79.

[Chandrasekaran, et al, 1991] B. Chandrasekaran, R. Bhatnagar and D. D. Sharma, "Real-time disturbance control," Communications of the ACM, 34(8), August 1991, 33-47.

[Chandrasekaran, 1992] B. Chandrasekaran, "QP is more than SPQR and dynamical systems theory: Response to Sacks and Doyle," Computational Intelligence, 8(2), 1992, 216-222.

[Chandrasekaran, et al, 1992] B. Chandrasekaran, Todd Johnson and Jack W. Smith, "Task structure analysis for knowledge modeling," Communications of the ACM, 33(9), September 1992, 124-136.

[Forbus, 1988] K. D. Forbus, "Qualitative physics: Past, present and future," in Exploring Artificial Intelligence, H. Shrobe, ed., San Mateo, CA: Morgan Kaufmann, 1988, 239-296.

[Laird, et al, 1987] J. E. Laird, A. Newell and P. S. Rosenbloom, "SOAR: An architecture for general intelligence," Artificial Intelligence, 33, 1987, 1-64.

[Newell, 1981] A. Newell, "The Knowledge Level," AI Magazine, Summer 1981, 1-19.
