CSC 320: Summer 2017
Assignment #4, due at the beginning of class, Friday, July 14, 2017
Instructions for all assignments:
Draw boxes for your marks on the top of the first page of your submission.
Place a 0 in the corresponding box for any questions you omit. For this assignment:
Question | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
Marks    | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0  |
Questions should be in order.
Show your work unless otherwise stated.
Please put your name on your assignment.
Question 1.
The purpose of this question is for you to experience the strange situation of a program taking itself as its own input (self-reference).
(a) [5] Write a program in either C or Java.
The program should read from standard input and write to standard output.
It should count the number of times that the lowercase letter a occurs in the input.
If the number of occurrences is prime, the program should print
YES- The number of a's is X which is prime.
where X is replaced by your count of the number of a's.
If the number of occurrences is composite, the program should print
NO- The number of a's is X which is divisible by Y.
where X is replaced by your count of the number of a's, and Y is a divisor of X (1 < Y < X).
(b) [5] Test your program using the program itself as input, and hand in both your program listing and the output that it gives.
Use a pen to highlight all the a's in your program so that your output can be more easily checked.
To run the program on itself on a Unix-type system:
With a C program count.c:
gcc count.c
./a.out < count.c > out
With a Java program Count.java:
javac Count.java
java Count < Count.java > out
You can also use < to take the input from a file instead of standard input, and > to send the output to a file instead of standard output, on a PC. Please ask for help if you do not know how to do this.
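For reference, a minimal sketch of the kind of counting program described in part (a), written in Java, might look like the following. The class name Count is just a placeholder, and the handling of counts of 0 and 1 (which are neither prime nor composite) is an assumption, since the question does not specify that case.

import java.io.IOException;

public class Count {
    // Returns a divisor d with 1 < d < n if n is composite, or 0 otherwise
    // (i.e. when n is prime or n < 2).
    static int smallDivisor(int n) {
        for (int d = 2; d * d <= n; d++) {
            if (n % d == 0) return d;
        }
        return 0;
    }

    public static void main(String[] args) throws IOException {
        int count = 0;
        int c;
        // Read standard input one byte at a time and count lowercase a's.
        while ((c = System.in.read()) != -1) {
            if (c == 'a') count++;
        }
        int d = smallDivisor(count);
        if (count >= 2 && d == 0) {
            System.out.println("YES- The number of a's is " + count + " which is prime.");
        } else if (d != 0) {
            System.out.println("NO- The number of a's is " + count + " which is divisible by " + d + ".");
        } else {
            // A count of 0 or 1 is neither prime nor composite; the assignment
            // does not say what to print here, so this sketch just reports it.
            System.out.println("The number of a's is " + count + " which is neither prime nor composite.");
        }
    }
}

Such a program can then be compiled and run on its own source exactly as in the commands shown above.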
Question 2.
Consider the following rightmost derivation:
S => T B C => T B S C => T B S b => T B a b b => T a B d a b b =>
T a d a b b => A a d a b b => T a d a b b => c c a d a b b
(a) [4] Draw the parse tree that corresponds to this derivation.
(b) [3] Use your parse tree from (a) to find a leftmost derivation.
(c) [3] Determine all possible viable choices for u, v, x, y, z as per the pumping theorem.
Question 3.
Consider the following context-free grammar:
G = ({S}, {a, b}, A, S), where the rules in A are:
S -> e
S -> a S a
S -> a S b
S -> b S a
S -> a a S a a
S -> a a S b b
(a) [2] List the strings of length at most 6 generated by this grammar.
(b) [2] Give a formula for the number of strings of length n = 2k generated by this grammar.
(c) [6] Prove your formula is correct by induction.
For full marks, your proof must talk about the grammar and what happens during the generation of a string.
You must indicate where you apply the induction hypothesis. A proof consisting of nothing but algebra will get you zero marks.
Question 4.
Design PDA's that accept these languages.
Make sure to include lots of comments explaining what your PDA's are doing.
(a) [5] L = { w in {a, b, c}* : w has the same number of a's as b's, or w has twice as many c's as a's (or satisfies both) }
(b) [5] L = { w in {0, 1}* : w = u u^R v v^R 1^k for some u, v in {0, 1}* and k >= 2 }
Question 5.
[10]
Our definition of a regular expression over the alphabet {0, 1} is:
[Basis] ϕ, 0, and 1 are regular expressions.
[Inductive step] If α and β are regular expressions, then so are
(i) (α ∪ β),
(ii) (α β), and
(iii) α*.
Let L be the language consisting of all strings over the alphabet {0, 1, (, ), ∪, *, ϕ} that represent regular expressions over the alphabet {0, 1}.
Starting with a context-free grammar for generating L, construct a PDA that accepts L by mimicking derivations in the grammar.
Important: use the definition of a regular expression given above; do not consider regular expressions that have the ( and ) symbols omitted when not needed for precedence.
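As an illustration of the intended starting point (you are free to use a different grammar), one context-free grammar for L simply mirrors the inductive definition clause by clause, with a single nonterminal R standing for "regular expression":
R -> ϕ
R -> 0
R -> 1
R -> ( R ∪ R )
R -> ( R R )
R -> R *
Each rule corresponds to one clause of the definition, which is what makes the "mimic the derivations" PDA construction direct to set up.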
Question 6.
An empty-stack PDA accepts a string w if there exists some computation that starts in the start state, applies legal transitions at each step, and terminates in a final state with all of the input consumed and an empty stack.
For an empty-stack PDA M = (K, Σ, Γ, Δ, s, F),
L(M) = {w in Σ* : (s, w, e) yields (f, e, e) in 0 or more steps, for some f in F}.
This is our standard model of a PDA.
A stack-oblivious PDA accepts a string w if there exists some computation that starts in the start state, applies legal transitions at each step, and terminates in a final state with all of the input consumed; it does not matter whether or not anything remains on the stack.
For a stack-oblivious PDA M = (K, Σ, Γ, Δ, s, F),
L(M) = {w in Σ* : (s, w, e) yields (f, e, u) in 0 or more steps, for some f in F and some u in Γ*}.
Consider the following
PDA M:
Start state: p
Final states: {q}
Transitions:
State | Read | Pop | Next State | Push |
p | a | e | p | 1 |
p | a | e | p | 2 |
p | e | e | q | e |
q | b | 1 | q | e |
q | b | 2 | q | 1 |
(a) [2] If M is an empty-stack PDA, what language does it accept?
(b) [2] If M is a stack-oblivious PDA, what language does it accept?
(c) [4] Describe a construction that will convert an arbitrary stack-oblivious PDA M = (K, Σ, Γ, Δ, s, F) to an empty-stack PDA M' = (K', Σ', Γ', Δ', s', F') such that both PDA's accept the same language.
(d) [2] Apply your construction from (c) to the stack-oblivious PDA M from (b) to create an equivalent empty-stack PDA M'.
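If you would like to experiment with the difference between the two acceptance conditions in parts (a) and (b), the following is a minimal brute-force simulator sketch for the PDA M above, written in Java. The class name PdaSim and the way the transition table is encoded are only illustrative assumptions; this is an experimentation aid, not something you need to hand in.

// Requires Java 16+ (records).
import java.util.*;

public class PdaSim {
    // One transition: in `state`, read `read` ("" = e), pop `pop` ("" = e),
    // go to `next` and push `push` ("" = e).
    record Rule(String state, String read, String pop, String next, String push) {}

    // The transition table of M, copied from the question.
    static final List<Rule> RULES = List.of(
        new Rule("p", "a", "",  "p", "1"),
        new Rule("p", "a", "",  "p", "2"),
        new Rule("p", "",  "",  "q", ""),
        new Rule("q", "b", "1", "q", ""),
        new Rule("q", "b", "2", "q", "1"));
    static final String START = "p";
    static final Set<String> FINAL = Set.of("q");

    // Explore all reachable configurations (state, input position, stack).
    // The stack is a string whose leftmost character is the top of the stack.
    static boolean accepts(String w, boolean requireEmptyStack) {
        Deque<String[]> todo = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        todo.push(new String[]{START, "0", ""});
        while (!todo.isEmpty()) {
            String[] cfg = todo.pop();
            String state = cfg[0], stack = cfg[2];
            int pos = Integer.parseInt(cfg[1]);
            if (!seen.add(state + "|" + pos + "|" + stack)) continue;
            // Accept if all input is consumed, we are in a final state, and
            // (for the empty-stack convention only) the stack is empty.
            if (pos == w.length() && FINAL.contains(state)
                    && (!requireEmptyStack || stack.isEmpty())) return true;
            for (Rule r : RULES) {
                if (!r.state().equals(state)) continue;
                if (!r.read().isEmpty()
                        && (pos >= w.length() || w.charAt(pos) != r.read().charAt(0))) continue;
                if (!stack.startsWith(r.pop())) continue;
                int newPos = pos + r.read().length();
                String newStack = r.push() + stack.substring(r.pop().length());
                todo.push(new String[]{r.next(), String.valueOf(newPos), newStack});
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Try a few strings to build intuition for parts (a) and (b).
        for (String w : List.of("", "ab", "aab", "aabb", "aaabbb", "aabbb")) {
            System.out.println("\"" + w + "\"  empty-stack: " + accepts(w, true)
                    + "  stack-oblivious: " + accepts(w, false));
        }
    }
}

The naive search terminates for this particular M because every transition either consumes input or moves from p to q without touching the stack; it is not safe for arbitrary PDA's whose e-transitions can keep growing the stack.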
Question 7.
The goal of this question is to prove that context-free
languages are not closed under intersection.
Consider the following two languages that are context-free:
L1 = { a^p b^q a^r where p, q, r >= 0 and p <= q <= 3p }
L2 = { a^p b^q a^r where p, q, r > 0 and r > p }
(a) [1] Describe the language L= L1 intersect L2.
(b) [8] Prove that L is not context-free using the pumping theorem.
(c) [1] What does this tell you about closure of context-free
languages under intersection (explain fully)?
For parts (c) and (d) of each of the next three questions, hand in the output from the TM simulator (run it using java TM, not java JTM, to get printable output).
To make the last of these three questions easier, make sure you use different names for the states of your TM's in the first two questions, to facilitate combining them in the last one.
Also, follow the functional specifications for the TM's exactly.
Question 8.
Design a TM M1 which, on an input w in {a, b}*, halts with the configuration (h, # w # Y [#]) when w is in
L1 = a*b*a*
and instead halts with the configuration (h, # w # N [#]) when w is not in L1.
That is, the TM M1 preserves its input and indicates language membership by writing Y or N to the right of the input.
(a) [2] Give pseudo code for the algorithm you will implement.
(b) [2] Draw a machine schema for your algorithm.
(c) [3] Give explicit TM instructions which implement your
algorithm (with lots of comments).
(d) [3] Test it on several examples.
Question 9.
Design a TM M2 which, given an input w in {a, b}*, decides whether w is in
L2 = {w in {a, b}* : w has the same number of a's and b's}.
That is, on an input w in {a, b}*, the TM starts with the configuration (s, # w [#]), and it halts with (h, # Y [#]) when w is in L2 and halts with (h, # N [#]) when w is not in L2.
(a) [2] Give pseudo code for the algorithm you will implement.
(b) [2] Draw a machine schema for your algorithm.
(c) [3] Give explicit TM instructions which implement your
algorithm (with lots of comments).
(d) [3] Test it on several examples.
Question 10.
Combine your TM's M1 and M2 from the previous two questions to create a TM M3 which decides the language
{ a^r b^n a^(n-r) : n >= r }.
Note: you will also need some code to clean up the tape for inputs that are not of the form a*b*a*.
(a) [2] Give pseudo code for the algorithm you will implement.
You don't need to restate the details of your algorithms from the two previous questions; just refer to them as black boxes (code that meets the functional specifications of the previous questions, without considering any implementation details).
(b) [2] Draw a machine schema for your algorithm.
Use M1 and M2 from the previous two questions in your machine schema.
(c) [3] Give explicit TM instructions which implement your
algorithm (with lots of comments).
(d) [3] Test it on several examples.