
Passport: Improving Automated Formal Verification Using Identifiers


Abstract

Formal verification is one of the most effective ways to ensure that a system has its intended properties, but its heavy demand for manual effort often makes it prohibitively expensive. Research on proof synthesis, which learns from corpora of existing proofs to automate formal verification, has only recently begun to show its promise. These tools are effective because of the richness of the data in proof corpora: richness that comes both from the stylistic conventions followed by proof authors and from the powerful logical systems beneath the proofs. However, most prior work has focused on model architecture rather than on how to exploit the proof data as fully as possible, so this richness remains largely untapped. This paper asks how one property of proof data, identifiers, can be used more effectively.

We develop the Passport approach, which enriches the predictive Coq model used in proof synthesis with three new mechanisms for encoding identifiers. We evaluate the effect of these enhancements on three base tools: ASTactic, Tac, and Tok. Head-to-head, the Passport-enhanced versions automatically prove 29% more theorems than these base tools. The three Passport-enhanced tools together automatically prove 38% more theorems than the three base tools together. Finally, combining the base tools with their enhanced versions proves 45% more theorems than the base tools alone. Overall, our results suggest that effective modeling of identifiers can play an important role in improving proof synthesis, leading in turn to higher-quality software.

1 Introduction

Proof assistants enable the verification of software properties, eliminating costly and potentially dangerous bugs and, in practice, leading to more reliable software systems. Teams of experts have already verified large, critical systems, for example microkernels [Klein et al. 2009], distributed systems [Wilcox et al. 2015], and compilers [Leroy 2009], and the same promise extends to hundreds of other software systems [Ringer et al. 2019]. These benefits have already had a major impact on industry. For example, Airbus France uses the CompCert verified C compiler [Leroy 2009] to guarantee safety and performance, and Chrome and Android use cryptographic code verified in Coq to secure communication [Erbsen et al. 2019]. However, the full potential of these proof assistants is still far from realized, because the cost of developing and maintaining verified software remains too high for all but experts [Ringer et al. 2019].

To verify theorems with these proof assistants, a proof engineer typically writes a sequence of high-level strategies called a proof script, which directs the assistant through the low-level reasoning needed to verify the theorem [Ringer et al. 2019]. Recently, machine learning has been applied to synthesizing these proof scripts automatically [Sanchez-Stern et al. 2020; First et al. 2020; First and Brun 2022; Paliwal et al. 2020]. These proof-synthesis tools learn from corpora of proofs and theorems how to guide the construction of proof scripts for new theorems. In particular, the tools train a predictive model of proof scripts and then search the space of possible proofs, with the model proposing candidate steps, the proof assistant providing feedback, and the search continuing until a complete proof is found.

In this paper, we study how to improve these predictive models so that they optimally exploit the rich proof data from which they learn. In particular, we focus on the modeling of identifiers: the names that uniquely identify theorems, datatypes, functions, type constructors, and local variables. Prior machine-learning-based proof-synthesis tools either ignored identifier names entirely and encoded only categorical information about them, or assigned unique indices to the most common identifiers and marked the rest as unknown, with no categorical information. In this paper, we extend the models used in existing proof-synthesis tools with the Passport approach, three new mechanisms for encoding identifiers. We apply our approach to tools that synthesize proofs for the Coq proof assistant [Coq Development Team 2021] and find that all three encodings improve the tools' performance.

The "passport approach" refers to our layout, which extends the models of existing evidence synthesis tools with support for person number information. The majority of our review has focused on the use of Passport on an existing tool called Tok [First et al. 2020]. If clarification is needed, we make a clear distinction between alignment and inventory, which is the result of refining the existing TOK model to accommodate our scenario.

Identifiers in Passport. The Passport approach encodes identifiers using three kinds of encoding mechanisms (detailed in Sections 3 and 4):

Category vocabulary indexing: for each identifier, we encode an indication of which category it belongs to (global definition, local variable, or type constructor). For the most common identifiers in each category, we also encode an index identifying the exact name; that is, each such identifier is assigned a unique token that links it to all other uses of the same identifier.

Subword sequence modeling: for each identifier, we use a subword-level sequence model to generalize between related names; that is, the identifier is split into common textual subunits, which are then processed by a sequence model.

Path elaboration: for type constructors and global definitions, we encode their fully qualified path (the name of the library, file, and module in which they reside).

While this paper targets Coq, the approach should also apply to other proof assistants, such as Lean [Lean Development Team 2021], Isabelle/HOL, and Agda [Agda Development Team 2021].

We evaluate the Passport approach using the CoqGym benchmark [Yang and Deng 2019], comparing against three existing search-based proof-synthesis tools: ASTactic [Yang and Deng 2019], and Tac and Tok [First et al. 2020]. All three of our encoding mechanisms improve tool performance, measured as the number of theorems a tool can fully prove; for example, adding one of the encodings on its own proves 12.6% more theorems. We also measure the impact of adding identifier information for each identifier category separately and find that the Passport approach benefits from each of them.

Together with the three prior tools, the tools enhanced with the Passport approach fully prove 1,820 of the 10,782 theorems in our benchmark's test set, 45% more theorems than the prior tools alone.

The main contributions of our work are as follows:

The Passport approach (Section 4), a set of techniques for encoding identifiers in the context of a proof assistant.

An implementation of the approach as the tool Passport, built within an existing proof-synthesis framework. Passport is open source: https://github.com/LASER-UMASS/Passport.

An evaluation (Section 5) showing that (1) proof synthesis improves when the Passport approach combines up to three identifier encodings, (2) each identifier-encoding mechanism individually improves proof-synthesis performance, and (3) encoding each category of identifiers separately improves over encoding them all together.

A discussion of challenges encountered during Passport's development (in particular, in comparing against prior proof-synthesis automation) and the lessons that can be drawn from them (Section 6). We relate these lessons to our experience measuring effects in the presence of nondeterminism (Section 5.6).

2 Background on Proofs and Proof Synthesis

To verify a theorem in Coq, a proof engineer begins by stating the theorem to be proven. Behind the scenes, each Coq theorem is a type in a rich type system, and writing a Coq proof corresponds to finding a term of that type.1

Constructing such terms directly is difficult, so instead, proof engineers verify theorems interactively, with Coq's help. At each step, the engineer issues a high-level strategy called a tactic, and Coq applies it, transforming the current proof obligations. Each tactic directs the search for a term of the type the theorem denotes, refining the proof state until no obligations remain. At that point, the sequence of tactics the proof engineer has written, called a proof script (as in Figure 3(a)), has led Coq to construct a proof term of the specified type. In Coq, the language of tactics and proof scripts is called Ltac, and the language of proof terms, programs, and definitions is called Gallina.

Recently, machine-learning-based proof-synthesis tools have been developed to ease the burden of proving theorems by synthesizing proof scripts mechanically, in place of or in collaboration with their users. While the details of these tools differ, most share similar components and structure.

Figure 1 shows the general architecture of most machine-learning-driven proof-synthesis tools. At the tool's core are prediction models that guide the proof search by generating candidate tactics for the next step of the proof. Each prediction model takes as input information about the proof state or the proof script so far and produces a set of candidate tactics. The tool uses the prediction models to predict one or more likely next tactics and uses the proof assistant for feedback on those tactics (for example, rejecting tactics that produce errors or that leave the proof state unchanged). The tool thus explores the space of likely proof states, repeatedly applying the prediction models to propose tactics and the proof assistant to validate them, until it finds a complete proof or gives up the search. The accuracy of the prediction models is therefore crucial to the search procedure's chance of success, and for the models to be accurate, they must effectively encode the current proof state and proof script. Passport works by enriching what the prediction models encode, resulting in more effective exploration of the space of possible proofs.
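To make this loop concrete, here is a minimal sketch of model-guided depth-first proof search. It is an illustration only, not Passport's implementation: the model.predict interface and the coq session object (num_goals, proof_state, run, undo) are hypothetical stand-ins for a learned tactic predictor and a proof-assistant connection.

```python
class CoqError(Exception):
    """Raised by the (hypothetical) proof-assistant session on a bad tactic."""

def search(coq, model, depth_limit=10):
    """Try to finish the current proof; return a tactic list or None."""
    if coq.num_goals() == 0:              # no obligations left: proof complete
        return []
    if depth_limit == 0:
        return None
    state = coq.proof_state()
    # The prediction model ranks candidate tactics for the current state.
    for tactic in model.predict(state, n_candidates=5):
        try:
            coq.run(tactic)               # proof-assistant feedback
        except CoqError:
            continue                      # reject tactics that error out
        if coq.proof_state() == state:    # reject tactics that do nothing
            coq.undo()
            continue
        rest = search(coq, model, depth_limit - 1)
        if rest is not None:
            return [tactic] + rest
        coq.undo()                        # backtrack, try the next candidate
    return None
```

A depth limit and a small candidate budget per step keep the search tractable.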

Fig. 1. Architecture of a machine-learning-based proof-synthesis tool built around prediction models.

3 Overview of the Passport Approach

Passport builds on ASTactic and TacTok. The architecture of Passport's tactic-prediction model adopts the design of the ASTactic model [Yang and Deng 2019] for encoding proof states, and of the TacTok model [First et al. 2020] for encoding proof scripts.

A proof state consists of the goal to be proven, the local context, and the environment. Each proof-state term has a core representation in the form of an abstract syntax tree (AST). ASTactic serializes these ASTs and encodes them using a TreeLSTM [Tai et al. 2015] [Yang and Deng 2019]. The TacTok model reuses this encoding for proof states.
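For intuition, the following is a minimal child-sum Tree-LSTM cell in the style of Tai et al. [2015]: it computes a node's state from the node's embedding and its children's states, so an AST can be encoded bottom-up. This is a sketch under simplifying assumptions (unbatched, a single cell), not ASTactic's actual encoder.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Child-sum Tree-LSTM cell (Tai et al. 2015) for encoding AST nodes."""

    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.h_dim = h_dim
        self.iou = nn.Linear(x_dim + h_dim, 3 * h_dim)  # input, output, update
        self.f_x = nn.Linear(x_dim, h_dim)              # forget gate, node part
        self.f_h = nn.Linear(h_dim, h_dim)              # forget gate, child part

    def forward(self, x, child_h, child_c):
        # x: (x_dim,) node embedding; child_h, child_c: (num_children, h_dim)
        h_sum = child_h.sum(dim=0) if child_h.numel() else x.new_zeros(self.h_dim)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        c = i * u
        if child_h.numel():
            # One forget gate per child; sum the gated child cell states.
            f = torch.sigmoid(self.f_x(x) + self.f_h(child_h))
            c = c + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c  # the root's h summarizes the whole (sub)tree
```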

Proof scripts are sequences of tokens in the Ltac language. Before these tokens are encoded, each proof script is preprocessed to remove low-signal, high-frequency tokens such as punctuation. The TacTok model encodes the resulting token sequence using a bidirectional LSTM [Peters et al. 2018] [First et al. 2020].
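A minimal sketch of a proof-script encoder of this kind is shown below. The vocabulary, dimensions, and pooling are illustrative assumptions rather than TacTok's actual choices.

```python
import torch.nn as nn

class ScriptEncoder(nn.Module):
    """Encode a preprocessed Ltac token sequence with a bidirectional LSTM."""

    def __init__(self, vocab_size, emb_dim=128, h_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, h_dim, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices of proof-script tokens
        out, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2 * h_dim)
        return out.mean(dim=1)                     # one summary vector per script
```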

The ASTactic and TacTok models are trained with supervised learning on a corpus of human-written proofs to predict the next proof step (a tactic and its arguments) for an incomplete proof. ASTactic's constrained generative tactic-grammar model [Yang and Deng 2019] shapes these predictions. For a given proof state there may be many correct next steps, but the training data records only the step its human author chose, so no alternatives are available for the model to consider; as a result, the models learn to imitate human-written proofs.
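In its simplest form, this imitation objective is a cross-entropy loss against the human-chosen tactic. The schematic step below assumes a model that scores a fixed set of candidate tactics; the names are illustrative, not Passport's code.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, proof_state, script_prefix, human_tactic_id):
    """One imitation-learning step: make the human-written tactic more likely."""
    logits = model(proof_state, script_prefix)  # scores over candidate tactics
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([human_tactic_id]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```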

A proof state is made up of many Gallina terms and definitions. Modeling these definitions well is key to building an accurate model. However, prior models encoded identifiers generically, discarding most of the information that the identifiers appearing in definitions carry. Since Coq proofs are rich with identifier information, it is critical to encode identifiers well. One reason identifiers matter more in Coq than in many other languages is that Coq has almost no built-in datatypes, so developments define their own; the names of these definitions can carry significant meaning, and that meaning is often reflected in the names of the theorems that refer to them. In this paper, we describe and evaluate enhancements to the encoding of identifiers in the tactic-prediction model.

Identifier Categories

To exploit the information hidden in identifiers, the Passport approach adds three categories of identifiers to the model of definitions. To understand these identifier categories, consider the definitions shown in Figure 2, drawn from a verified cryptographic library.

Fig. 2. Definitions related to the posnat type, a pair of a natural number and a proof that it is greater than zero. These definitions are included in the Foundational Cryptography Framework,2 used as part of verified software toolchains.3

The identifier posnat is a global definition (outlined in red, 1): datatypes, functions, theorems, and proof scripts anywhere can refer to the single global definition of posnat.

The identifier n, in contrast, is a local variable (outlined in orange, 2): it can be referenced within the local context of this term, but not beyond it.

The identifier posnatEq_intro is a type constructor (outlined in yellow, 3): datatypes, functions, theorems, and proof scripts can reference it to create new objects of type posnatEq.

Appendix A describes these identifier categories (global definitions, local variables, and type constructors) in more detail and illustrates why each category is useful to a tactic-prediction model. Appendix A.4 describes the implementation work required to enhance the model with these three identifier categories.
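As a rough illustration of the distinction, the sketch below assigns each identifier occurrence to one of the three categories, given the names bound in the local context and the known constructor and global names. The data structures are hypothetical simplifications of what a real Coq front end would provide.

```python
from enum import Enum

class IdCategory(Enum):
    GLOBAL_DEF = 0   # e.g., posnat, Nat.mul, mult_gt_0
    LOCAL_VAR = 1    # e.g., n, bound in the term's local context
    CONSTRUCTOR = 2  # e.g., posnatEq_intro, exist

def categorize(name, local_context, constructors, global_defs):
    """Assign an identifier occurrence to one of Passport's three categories."""
    if name in local_context:    # local binders take precedence
        return IdCategory.LOCAL_VAR
    if name in constructors:
        return IdCategory.CONSTRUCTOR
    if name in global_defs:
        return IdCategory.GLOBAL_DEF
    raise KeyError(f"unknown identifier: {name}")

# With the Figure 2 names:
assert categorize("n", {"n"}, {"posnatEq_intro"}, {"posnat"}) == IdCategory.LOCAL_VAR
```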

Encodings. Figure 3 shows a proof about these definitions, posnatMult_comm, which states that posnat multiplication is commutative: swapping the order of its arguments never changes the result. Making progress on this proof requires understanding a fair amount about the identifiers involved.

Fig. 3. A proof that uses the definitions of Figure 2, from the same file.

The type constructor exist is the general constructor of sigma (existential) types, and there are special tactics for constructing objects of these types (exists, eexists, and so on).

The goal type posnatEq is related to posnats and to equality.

The function Nat.mul is defined in the Coq standard library, while mult_gt_0 is a theorem defined in the current project.

Understanding these facts requires three distinct capabilities: linking occurrences of the same common identifier to one another, understanding how the pieces of a name relate to different concepts, and remembering where the referenced definitions are defined.

The essence of this paper is enhancing a Coq proof-synthesis model with this rich identifier information. Figure 4 shows how the Passport approach encodes identifiers. To take full advantage of the wealth of identifier information, our approach uses three main encoding mechanisms:

Fig. 4. Passport's architecture for processing identifiers.

Category vocabulary indexing (Section 4.1)

Subword sequence modeling (Section 4.2)

Path elaboration (Section 4.3)

4 Passport Encodings

4.1 Category Vocabulary Indexing

"Details of the path" (section 4. < SPAN> Appendix a (SPAN> Appendix a) is a category of these identification data (global definition, local variable, components, component names, component names, components. ) In addition, the reason why each category is useful for encoding the tactical prediction model is an example, to enhance the model with these three categories. Explain the necessary implementation work.


4.2 Subword Sequence Modeling

Subword sequence modeling splits each identifier into common textual subunits and processes the resulting sequence with a sequence model. This lets the model generalize between related names, such as names that share the subword mult.
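Passport derives its subword units from the proof corpus; the splitter below is a simplified greedy longest-match stand-in over an assumed unit inventory, shown only to make the idea concrete.

```python
def split_subwords(identifier, units):
    """Greedily split an identifier into known subword units, longest first."""
    pieces, i = [], 0
    name = identifier.lower()
    while i < len(name):
        for j in range(len(name), i, -1):       # try the longest match first
            if name[i:j] in units or j == i + 1:
                pieces.append(name[i:j])        # single characters always match
                i = j
                break
    return pieces

# With an assumed inventory of corpus-derived units:
units = {"pos", "nat", "mult", "comm", "_", "gt"}
print(split_subwords("posnatMult_comm", units))
# -> ['pos', 'nat', 'mult', '_', 'comm']
```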


4.3 Path Elaboration

Path elaboration encodes, for type constructors and global definitions, the fully qualified path of the definition: the library, file, and module in which it resides.
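A minimal sketch of this step: split a fully qualified Coq name (in dotted module syntax) into its path components and base name, so the model can see where a definition lives. The helper and examples are illustrative.

```python
def elaborate_path(qualified_name):
    """Split a fully qualified Coq name into path components and a base name."""
    *path, base = qualified_name.split(".")
    return path, base

print(elaborate_path("Coq.Init.Nat.mul"))  # (['Coq', 'Init', 'Nat'], 'mul')
print(elaborate_path("mult_gt_0"))         # ([], 'mult_gt_0'): current file
```

Distinguishing, for example, the standard library's Nat.mul from a project-local theorem like mult_gt_0 is exactly the kind of information path elaboration makes available to the model.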
